All times are in Pacific Time (UTC -7)
8:45 - 9:50 DMP 301
Dr. Leo Porter
Using Computing Education Research to Understand and Aid Our Students
At the core of most computing education research projects is the desire to better understand our students and identify ways of improving their outcomes. In this talk, we will examine three examples of how I have used computing education research studies to learn about computing students and evaluate interventions aimed at improving their outcomes. First, we will examine the adoption and evaluation of Peer Instruction in computer science, including the evidence that it is valued by students, improves student learning, reduces failure rates, and may contribute to improved retention of CS students. Second, we will examine our finding that a surprising number of students enter later computing courses with a level of prerequisite course proficiency below what instructors might expect. Third, we will look at how the use of the validated Clance IP Scale has identified that a majority of our computing students struggle with Imposter Phenomenon and that female students are more likely to suffer from Imposter Phenomenon than male students. In each example, I will address important next steps for teachers as well as the broader computing education research community.
9:50 - 10:15 DMP 201
Paper Session I
10:15 - 11:40 DMP 301
Session Chair: Steve Wolfman
Evaluating the Efficacy and Impacts of Remote Pair Programming for Introductory Computer Science Students
With the increase in online learning, instructors are looking for novel ways of supporting student learning and getting students to collaborate in online environments. Pair programming allows students to brainstorm and problem-solve in teams and has been found to help with improving code design, attitudes toward computer science, productivity, and performance. However, past work has focused on face-to-face, in-person collaboration, and it is unclear whether these benefits will translate to an online context. This work replicates several studies evaluating the effects and benefits of in-person pair programming in an online environment. In an introduction to programming course, students participated in weekly online sessions where they were asked to solve a set of exercises in pairs or individually. We measured task performance, student opinions of the activities, and perceptions of remote pair programming. Our study found that remote pair programming had little to no impact on the time taken, promising but not statistically significant impacts on code correctness, and statistically significant impacts on students' perceptions of both their own experience and the efficiency and efficacy of pair programming. Our findings show that some, but not all, of the benefits of pair programming can be replicated in an online context.
Cloud-based Computer Labs and Their Silver Linings
The beginning of a first programming course is often overwhelming for students, with the cognitive load of the programming itself compounded by the need to install and configure compilation tools and become accustomed to the build process. Students who develop code on their own machines might have extra difficulty if their environment does not match the environment used by their instructors or in lab sessions. This article discusses the experience of the authors using the cloud-based JupyterHub platform, which provides each student a private Linux VM, to provide a uniform environment for introductory C programming.
Long Paper - Online
A Longitudinal Evaluation of the Impact of a Graduate Student Accessibility Training on Student Learning Outcomes
This paper describes a training program designed to increase accessibility competencies in graduate students of interdisciplinary backgrounds, including those in computing education, and presents a longitudinal study that examined the program's effectiveness. We surveyed two graduate student cohorts in the program at multiple periods over eight months (N = 14). Students reported their level of program engagement (i.e., physical, emotional, and cognitive), empathy, technical knowledge, and career interests in accessibility. We found that participants' physical engagement (i.e., the degree to which a student puts effort into working on assignments) and empathy increased over time at a marginal significance level. Students reported high to medium medians on other measurements, which implies that the program successfully maintained their engagement, technical knowledge, and career interests. There is a gap in computing education with respect to accessibility training for graduate students, and our training program can be a model to consider for computing educators.
11:40 - 12:30 DMP 201
12:30 - 13:45 DMP 301
Dr. Eleanor O'Rourke
Why Do Students Think They're Bad at Programming? Understanding and Supporting Motivation and Learning in CS1
In this talk, I will introduce a surprising motivational challenge in the domain of computer science education: students often think they're bad at programming, even when they are performing well. I will present a series of studies showing that students in introductory courses at the university level assess their own programming ability frequently, using criteria that are often misaligned with best practices (e.g. "I should be able to debug quickly based off a first glance"; "I shouldn't have to sit there and think"). Then, I will show how we can use these self-assessment criteria as a lens to better understand student experiences and the ways they approach learning. Finally, I will present initial research exploring how we can help through the design of both curricular interventions and intelligent programming environments that help students re-frame these experiences. Through this body of work, I will demonstrate the value of conducting research that crosses the boundaries of computer science and the learning sciences to deeply understand learning contexts and inform the design of novel learning environments.
Blizzarding the Niftys
13:45 - 14:30
Room 1 - DMP 301 (Hybrid)
Session Chair: Don Acton
Extending Canvas for a Better Student (and Instructor) Experience
The design and presentation of content can be a challenge within the Canvas LMS, with obstacles including a limited set of allowed HTML tags, minimal CSS support, and difficulty embedding responsive objects and displaying syntax-highlighted code blocks. By partnering Canvas with a quick method to seamlessly display constraint-free external content, you can achieve a better student experience as well as a flexible content editing experience for tech-savvy course authors.
In this Blizzard presentation, Paul Hibbitts will share key user experience insights and techniques, including the use of his open-source web app Docsify-This.net to seamlessly display Markdown content, which all contributed to a better student experience on a range of devices for his Simon Fraser University CMPT-363 User Interface Design course (https://canvas.sfu.ca/courses/76289). The techniques and open source software shown in this presentation can also be used with many other LMSs, such as Moodle and Brightspace.
Supporting lost students in CS1: a data-driven approach
As enrollments increase, the ability to support students who are struggling with the course becomes almost nonexistent for CS educators. In this blizzard talk, we reflect on our attempt to manage the support of lost students by creating a team of teaching assistants focused on identifying lost students based on LMS data and providing additional support and resources. We briefly discuss the team's major accomplishments, lessons learned, and future directions.
An Approach for Computer Science Integration in High School Math and Science
Computer Science can be taught as a standalone subject or integrated with other subjects in K-12. In Ontario, coding has recently been added to the core subjects of Math and Science from grades 1-9. In this talk we will share our resources and approach to teacher training and pedagogy using Python and Jupyter notebooks. The resources are aligned with the Grade 9 Math and Science courses in Ontario and they are free to use and modify for the classroom (csintegration.ca).
Nifty Assignment Virtual
Object Oriented Programming Exercises for CS2 Courses: Urban Forestry Theme
We have provided four programming labs designed to teach basic principles of object-oriented design while exposing students to public data about the urban forest. The labs make use of data inventories from large Canadian cities as well as allometric calculations for Canadian trees. While the labs have been designed to expose students to data science for public service, their primary goal is to teach software design patterns and object-oriented design strategies. More specifically, the labs address 1) creation of basic software objects in Java; 2) creation of basic graphical user interfaces; 3) use of the Observer design pattern; and 4) use of the Decorator design pattern. Variations of the labs in this collection have been used by approximately 450 students enrolled in a CS2 course (Introduction to Software Design) at a large North American university.
For an overview of the labs that have been included, please refer to "WCCCE_summary.pdf" in the attached Zipped file archive.
Room 2 - DMP 110 (All in-person)
Session Chair: Ed Knorr
Introduction to Tools and Techniques in Computer Science: new lab courses at the University of Manitoba.
We are introducing two new 1000-level lab courses at the University of Manitoba to provide students with a hands-on introduction to working with the tools and techniques we use every day to design, develop, analyze, and maintain software.
We want to ensure that all Computer Science students entering our program have the opportunity to learn and practice tools and techniques that can help them succeed in later academic years and beyond. The tools and techniques we are introducing in these courses have traditionally been topics that students are expected to learn independently, including: writing structured text (Markdown and LaTeX), using the command line and basic shell scripting, version control, debugging strategies, and more.
Our goals for these lab courses include:
Reducing situations in courses where student success is impeded because they don’t know about or can’t use tools that will help them with their task, and can’t learn about them because of the urgency of their task. (“I can’t learn how to use a debugger because I need to finish my assignment now”).
Providing opportunities for students in under-represented groups or students with little to no computing education to learn to use tools and techniques that their peers may have learned independently or in secondary school.
Giving credit to students for their time and effort! We know that these tools and techniques are valuable, but we can’t expect students to invest time on top of what they’re already doing in classes without direct academic benefit.
LLM-Based Personal Coding Assistant
In the context of introductory programming, pre-trained Large Language Models like OpenAI GPT-3 and Codex, which power tools like ChatGPT, are capable of answering general programming questions, explaining code, or even generating the code solution to a given programming task. Currently, many universities are banning access to such generative AI tools as a way to curb over-reliance and plagiarism. We are taking a different approach and are experimenting with these tools in introductory programming courses. We attempt to control and tune the tool to generate plain-English responses instead of revealing the full code solution.
We are creating a new Personal Coding Assistant that generates immediate and personalized responses designed to meet the criteria of both students and educators: helpful and technically correct, while not revealing the actual code solution. The tool includes multiple features, such as Ask General Question, Help Fix Code, Ask Question From Code, Help Write Code (by generating high-level pseudo-code), and Explain Code. We have used an early version of the tool in a 750-student 2nd-year C programming course and are interested in connecting with other programming instructors who would be willing to try the tool in their courses. In this Blizzard talk, we will give a short demonstration of the tool.
Assessing Computational Thinking in an Introductory Computer Science Course
There is a growing need to teach computational thinking (CT) within K-12 education, as most provinces across Canada have incorporated it within their curricula. However, in-service teachers still lack a basic understanding of CT and require further training in this space. Furthermore, there is an insufficient number of validated, non-experimental CT skill assessments. My research study uses a quasi-experimental design with a mixed-methods approach to investigate whether an undergraduate introductory CS course, which K-12 teachers can take as part of their professional development, can promote CT skills, and, in doing so, to evaluate whether CT skills are associated with learning performance in this CS course. Unlike most CT-based studies, this will be an in situ study of an existing and diverse introductory CS course without any intervention. This study will produce a validated CT assessment that includes a pre- and post-test to measure students' CT skills before and after the course. This assessment, paired with follow-up interviews, will provide participants with an opportunity to reflect on their learning in a course that typically does not provide reflection. In addition, this study will provide educators and decision makers with a recommendation on whether teachers ought to take an introductory CS course in order to incorporate CT within their own subjects and classrooms.
Nifty Assignment In-person
Nifty Assignment: Viper
This assignment guides students through the creation of a snakelike game. A substantial code base is provided which includes the user interface and computer-controlled opponents. Students extend the provided code by implementing the functionality required for a human player. This assignment focuses on the use of lists (both one and two-dimensional) and the creation of new functions. It assumes that students have prior experience with elementary programming, decision making, and repetition constructs. The result of this assignment is a complete, playable game with AI players that provide an appropriate challenge for reasonably novice players.
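The assignment's own code base is not included in this program. As a purely illustrative sketch of the list manipulation involved (the names make_grid and move are hypothetical, not from the assignment), a player's body can be held as a one-dimensional list of (row, col) cells moving over a two-dimensional grid:

```python
# Illustrative sketch only: the real assignment provides its own code
# base and user interface. Names below (make_grid, move) are hypothetical.

ROWS, COLS = 10, 10

def make_grid():
    """Return an empty ROWS x COLS playing field as a two-dimensional list."""
    return [["." for _ in range(COLS)] for _ in range(ROWS)]

def move(body, direction):
    """Advance the body one step: add a new head cell, drop the tail cell."""
    dr, dc = {"up": (-1, 0), "down": (1, 0),
              "left": (0, -1), "right": (0, 1)}[direction]
    head_r, head_c = body[0]
    new_head = ((head_r + dr) % ROWS, (head_c + dc) % COLS)  # wrap around
    return [new_head] + body[:-1]

body = [(5, 5), (5, 4), (5, 3)]   # head first
body = move(body, "right")        # head advances one column
```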
14:30 - 15:00 DMP 201
Paper Session II
15:00 - 16:30 DMP 301
Session Chair: Jonatan Schroeder
"I am not alone in this": Experiences of Common Humanity and Social Connection from a Computer Science Course on the Metacognition of Learning
This paper introduces a Computer Science undergraduate course on metacognition and independent learning. Using an autoethnographic approach, the authors reflect upon their experiences as members of the teaching team and students in the course, to better understand the impact of this course on pedagogy, learning, and their identities as growing and established computer scientists. Student self-reflections point to the importance of fostering emotional bonds and common humanity in the classroom to facilitate learning. We discuss implications of this work on pedagogy, hiring, curriculum, and future research.
Specification and Scaffolding in Project-Based Learning of Systems Architecture
Project-based learning (PBL) through open-ended group projects is praised for fostering technical communication, collaboration, and leadership skills. We examine PBL in the group project element of Web Systems Architecture, an upper-level undergraduate systems course. We investigate learning outcomes, team dynamics, technical communication, and confidence-building. Our observations suggest that while learning outcomes are achieved similarly with and without additional specification and scaffolding, when given a choice of receiving further specification and scaffolding, students are inclined toward more specification and avoid taking risks in open-ended projects. Specifically, 86% of our students chose the more-specified project stream, and by the end of the project, 41% of the students (58% of survey participants) indicated a preference for an even more specified project. We explore the factors influencing this choice and discuss design alternatives to further motivate risk-taking, along with our initial results in using them. We also made additional observations throughout the process, including the superior performance of individuals self-identifying with a gender minority.
Applying Active Learning Techniques In Computing Courses Using Padlet Tool
By engaging students in activities that involve constructing knowledge and understanding, active learning approaches help students learn more effectively. In this paper, we present two active learning techniques that were successfully implemented in different computing courses using the Padlet educational tool. The first technique involves collecting feedback from students after the first month, during class time, using a live platform such as Padlet. Students' feedback can be used to inform instruction and promote learning throughout the semester. In addition, an enhanced learning technique called "File traveling" has been implemented: using the Padlet tool, students form small groups and share questions, feedback, and answers to help one another learn. The use of both techniques makes students more engaged and encourages them to reflect on their learning.
Short Paper - Online
Using a Two-Stage Final Exam in an Intro CS Course: Student Perceptions and Grade Impacts
We present results from testing a two-stage exam format in a small, first-year programming class (n=24), including survey responses from students (n=8) about their experience with the exam format. Students reported liking the format due to a decrease in stress, helping them to better understand course concepts, and helping to improve their grades.
DAY 2 Friday
Paper Session III
9:00 - 10:30 DMP 301
Session Chair: Charles Hepler
spy3: A Python Subset for CS1
Simple Python 3, or spy3, is a system for CS1 that filters students' Python code and limits it to a subset of the Python language matching what is taught in the course. At the same time, spy3 provides improved error diagnostics for common novice problems, along with enhanced features for CS1 use, like the ability to see an execution trace of a running Python program. A web-based turtle graphics module added to spy3 made the system feasible for remote use by students, regardless of their computing device. We describe spy3's features, its implementation, and our experience using it with a large introductory Python class.
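spy3's actual implementation is not reproduced in this program. The underlying idea, rejecting Python constructs outside the taught subset, can be sketched with the standard ast module; the allow-list below is illustrative only and is not spy3's real subset:

```python
import ast

# Illustrative allow-list of AST node types a CS1 subset might permit;
# spy3's actual subset is defined by the course, not by this sketch.
ALLOWED = {
    ast.Module, ast.FunctionDef, ast.arguments, ast.arg, ast.Return,
    ast.Assign, ast.AugAssign, ast.Expr, ast.Call, ast.Name, ast.Constant,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div,
    ast.Compare, ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE,
    ast.If, ast.While, ast.For, ast.Load, ast.Store,
}

def check_subset(source):
    """Return (line, construct) pairs for every disallowed construct."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if type(node) not in ALLOWED:
            violations.append((getattr(node, "lineno", "?"),
                               type(node).__name__))
    return violations

check_subset("x = 1\nwhile x < 10:\n    x = x + 1\n")   # within subset
check_subset("squares = [n * n for n in range(10)]\n")  # flags ListComp
```

A real filter would also report a friendly message and the offending source line, which is where spy3's improved error diagnostics come in.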
Exploring ChatGPT’s impact on post-secondary education: A qualitative study
As Chat Generative Pre-trained Transformer (ChatGPT) gains traction, its impact on post-secondary education is increasingly being debated. This qualitative study explores the perception of students and faculty members at a research university in Canada regarding ChatGPT’s use in a post-secondary setting, focusing on how it could be incorporated and what ways instructors can respond to this technology. We present the summary of a discussion that took place in a two-hour focus group session with 40 participants from the computer science and engineering departments, and highlight issues surrounding plagiarism, assessment methods, and the appropriate use of ChatGPT. Findings suggest that students are likely to use ChatGPT, but there is a need for specific guidelines, more classroom assessments, and mandatory reporting of ChatGPT use. The study contributes to the emergent research on ChatGPT in higher education and emphasizes the importance of proactively addressing challenges and opportunities associated with ChatGPT adoption and use.
Generating CS1 Coding Questions using OpenAI
In CS1, to assess student knowledge, instructors prepare exam questions that often include code snippets. Due to the significant amount of time and effort required to create high-quality exam questions, instructors often only produce a single version of the exam. This results in all students receiving the same set of questions, which raises the possibility of plagiarism. In this paper, we propose a tool that allows computing science educators to generate a number of variations of a given code snippet, where the pedagogical intent of the code remains the same, but the code is mutated.
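The proposed tool generates its variations with OpenAI models. As a much simpler, non-LLM illustration of the same goal, a behaviour-preserving mutation such as consistent variable renaming can be sketched with Python's ast module (requires Python 3.9+ for ast.unparse; this is not the authors' implementation):

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Consistently rename variables so a snippet's behaviour, and thus
    its pedagogical intent, is unchanged while the surface text differs."""

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Rewrite both reads and writes of a variable, keeping the
        # renaming consistent across the whole snippet.
        node.id = self.mapping.get(node.id, node.id)
        return node

def mutate(source, mapping):
    """Return a renamed variant of the given code snippet."""
    tree = RenameVariables(mapping).visit(ast.parse(source))
    return ast.unparse(tree)

variant = mutate("total = 0\nfor x in data:\n    total = total + x\n",
                 {"total": "acc", "x": "item"})
# variant now sums into 'acc' over 'item', leaving 'data' untouched
```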
Short Paper - Online
Just-In-Time Prerequisite Review for a Machine Learning Course
We present a just-in-time strategy for prerequisite review in an upper-year machine learning course. This course has a range of prerequisites in math, computer science, and statistics that students are expected to have completed. Historically, instructors at our institution have presented a list of prerequisite resources at the beginning of term, but some of the material in these resources is not used until late in the term. With our just-in-time strategy for prerequisite review, we tie prerequisite concepts to each lecture. Before each lecture, students are asked to complete 2-4 multiple-choice review questions. A short video is provided with each question, so that if a student is unable to complete the question, they can review the relevant concept by watching the video.
10:30 - 11:00 DMP 201
11:00 - 12:00 DMP 301Dr. Mariana Silva
Innovative Online Tools to Improve Learning, Access, Equity, and Sense of Belonging
For the past three years, instructors have been challenged to find creative and efficient alternatives for teaching in remote and hybrid formats, creating an opportunity to explore innovative teaching pedagogies, such as flipped classrooms, computer-based assessments, and online testing. In this talk, I will describe how the adoption of an online assessment tool (PrairieLearn) has enabled: a) the implementation of computer-based collaborative synchronous and asynchronous activities for flipped lectures, b) the adoption of flexible deadlines allowing students to complete formative assessments at different paces, and c) the offering of online and in-person sections of the same course without sacrificing the quality of instruction. I will also discuss how these changes have impacted students' performance, satisfaction, and sense of belonging.
12:00 - 13:00 DMP 201
Paper Session IV
13:00 - 14:45 DMP 301
Session Chair: Sarah Carruthers
AI-Generated Code Not Considered Harmful
Recent developments in AI-generated code are merely the latest in a series of challenges to traditional computer science education. AI code generators, along with the plethora of available code on the Internet and sites that facilitate contract cheating, are a striking contrast to the heroic notion of programmers toiling away to create artisanal code from whole cloth.
We need not interpret this to mean that more, potentially automated, policing of student assignments is necessary: automated policing of student work is already fraught with complications and ethical concerns. We argue that instructors should instead reconsider assessment design in their pedagogy in light of recent developments, with a focus on how students build knowledge, practice skills, and develop processes. How can these new tools support students and the way they learn, and support the way that computer scientists will work in the years to come? This is an opportunity to revisit how computer science is taught, how it is assessed, how we think about and present academic integrity, and the role of the computer scientist in general.
Pan-Institutional Applied Research within Undergraduate and Post Degree Diploma Teaching Programs
Many undergraduate student applied research projects are conducted within the computer and computing science departments of colleges and universities. In this paper, we discuss experiences and results from seventeen years of pan-institutional applied research projects conducted through capstone project courses, upper-level courses in Data Mining, Data Warehousing, and Object Analysis and Design, and Special Topic Courses in Databases in the Computer Information Systems Diploma and Bachelor of Computer Information Systems Degree programs at Okanagan College, the Post Degree Diploma in Data Analytics program at Langara College, and similar courses at UBC Okanagan and the University of the Fraser Valley. Teaching a new computing science topic was synchronized with practical software development and software engineering projects carried out by small groups of students for industrial sponsors. In addition to industrial projects, we introduced several student-centred applied research projects from academia and industry in British Columbia and across Canada. In the last five years, we initiated collaboration on these student-centred projects among several post-secondary institutions in Canada (Okanagan and Langara Colleges, UBC Okanagan, and the University of the Fraser Valley) and internationally in France (University Paris-Est Creteil). Managers or executives from the sponsoring companies, along with instructors and professors from the educational institutions, supervised and supported students as sponsors, advisors, or mediators. Many student project teams developed impressive, high-quality engineering and research applications and systems. The sponsors provided positive feedback and references for most of the projects, and the results of some industrial projects were turned into products by the sponsoring companies.
PrairieLearn in CS1: An Experience Report
In an attempt to improve students' learning experiences in early computer science, we piloted the use of PrairieLearn, a mastery-based learning platform, in a CS1 course at our institution. The following experience report outlines the benefits and challenges of integrating this platform into various course components, alongside a brief analysis of students' performance in the course.
Short Paper - Online
Tips for Using Gamified Real-Time Polling Quizzes as a No-Stakes Engagement Tool for Computing Courses
This paper shares the experiences of using Kahoot, a gamified real-time polling platform, as a tool for engaging students in computing courses. The paper provides practical tips for designing and implementing Kahoot quizzes in a way that promotes student participation and motivation while keeping the stakes low. Special attention is given to the psychological aspects of quiz design, including how to maintain a motivating and entertaining nature without negatively affecting student self-efficacy, and how to promote deeper learning and retention of course material. Additionally, the paper explores using Kahoot quizzes as an interactive narrative tool for content delivery in lectures.
14:45 - 15:15 DMP 201
15:15 - 16:30 DMP 301
Demystifying Alternative Grading Systems
Many instructors want their students to focus on learning instead of grades. However, our systems, structures, and policies are heavily centred around grades and the fallacies of their inherent fairness. In these settings, it is hard to expect students to keep their attention on the material and not get distracted by the frequent input of various grades. Alternatives to traditional grading, such as standards or competency-based grading, specifications-based grading, and ungrading, allow instructors to change the conversation and redirect the focus on learning.
In this interactive panel, we will begin with a description of some of the most common alternative grading practices: standards, specifications, competency, contract, portfolio, labour, and ungrading. Then, the panellists will share their experiences on adopting these alternative grading systems in courses with some implementation details. Attendees will have plenty of opportunities to ask questions and panellists will also share their experiences on how we refocused students’ attention on rich, high-quality feedback instead of grades. We will also discuss the challenges and opportunities of these systems, and facilitate a discussion on how we can start working on broader structural changes to recentre higher education on learning, rather than points and grades.
The primary goal of this session is to examine different forms of alternative grading practices that inform formative and summative assessments, which in turn impact students' motivation, self-efficacy and course success.