I am a tinkerer in my own classrooms, constantly deploying new instructional strategies, technologies, and content. I am always happy to share effective interventions with colleagues in the broader educational community, and I look for opportunities to share beautiful content with students. This general curiosity and enthusiasm are the backdrop for my engagement in two disparate leadership roles over the past three years. The first, in which I embrace and evangelize for an assessment tool named PrairieLearn, has the potential to change the way exams are administered and marked all over campus. The second, a new computer science course for non-majors called CPSC203, gives students an option to deepen their programming experience beyond an introductory course, while at the same time demonstrating some of the most elegant and satisfying theoretical results from the field.

PrairieLearn

Most recently, my interest and energy have been focused on scalable, reliable, secure assessments, an involvement intensified by the abrupt migration to online instruction. My search for a solution to the assessment challenge led me back to my roots at the University of Illinois (UIUC), and to a free and open-source assessment tool called PrairieLearn (PL). As a champion of the platform, I provide demos, training, and ongoing support for interested faculty across UBC.

History

PrairieLearn is an open-source, web-based assessment platform developed by faculty at UIUC. A fortunate chain of events, begun in Winter 2018 and accelerated by Moyra Ditchfield (Director of Computing and Facilities in the UBC CS department), seeded the opportunity to build a PL user community at UBC. Of course I was quick to jump in, both because I had been a novice user of PL at UIUC (2015-2016) and because I could see its potential as a basis for meaningful evaluation of student learning at a huge scale.

Integration of new learning technology at the campus level requires development along several axes. The Faculty of Science, via the technical staff in the Department of Computer Science, has ensured that the PL platform is robust from both a performance and privacy standpoint. Their continued support in the form of a 75% FTE is a testament to the long-term enthusiasm for PL at the Faculty level. My role is to generate the interest and expertise for PL adoption among instructors from diverse departments, and to offer technical support, training, and mentoring as they migrate their content onto the platform.

To date, together with Paul Carter (Associate Head, CS) and Ditchfield, we have secured TLEF funding to support content migration for approximately seven courses over 2020-2021. So far, five courses in CS, one in EOAS, and one in PHYS are planning to use the platform in Winter 1, 2020. There is significant active interest in adoption from many departments, including Math, EOAS, Physics, Bio, Chem, Mech (APSC), and ECE (APSC). In the summer of 2020, a team of seven TAs, funded by our TLEF and supervised jointly by me and a course instructor, is building questions for five courses both within and outside CS. If the trajectory of PL at UBC matches that of UIUC, we will have thousands of students using the platform within a very short time.

Distinguishing Features

The magic and appeal of PL lie in its deep support for question randomization and in its great variety of question types. Instead of writing an assessment as a list of static questions, instructors focus their efforts on writing question generators that are closely linked to the learning objectives of a course, and that induce so much randomness that they can be re-deployed over many semesters. The analytics resulting from such consistent assessments are a firm basis for evaluating instructional interventions, and in my class I foresee robust, benchmarked studies on instructional adaptations such as mastery learning and group exams.

Consider the following question, a typical computational exam question from a data structures course:

In this problem we will investigate some properties of a full 5-ary tree. A full 5-ary tree is a rooted tree in which every node has either 0 or 5 children. In a full tree, a non-leaf node is called an internal node.

Suppose you know that a full 5-ary tree has 17 leaves. How many internal nodes does it have?

Now generalize your response to the previous part: suppose you know that a full 5-ary tree has $n$ leaves. Give an exact expression for the number of internal nodes.

There are many ways that the question can be generalized: the tree “arity” and the number of leaves are obvious choices for randomization. But instead of asking for internal nodes given leaves, we could similarly ask for total nodes given internal nodes, or leaves given internal nodes, and so on. PL allows randomization over all those possibilities, so the likelihood of two students solving the same problem, or even following the same process to solve it, is vanishingly small. Nonetheless, students must understand the same definitions and be able to apply their understanding in a common domain. The analytics associated with the question tell us more about student understanding, generally, than they do about rote process execution.
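For concreteness, every randomized variant rests on the same counting identity (writing $k$ for the arity, $i$ for the number of internal nodes, and $\ell$ for the number of leaves): a full $k$-ary tree with $i$ internal nodes has $ki + 1$ nodes in total, so

$$\ell = ki + 1 - i = (k-1)\,i + 1 \qquad\Longleftrightarrow\qquad i = \frac{\ell - 1}{k - 1}.$$

The instance above ($k = 5$, $\ell = 17$) gives $i = 16/4 = 4$ internal nodes; a variant that asks for leaves given internal nodes simply reads the identity in the other direction.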

In light of PL, question design for summative assessment has completely changed. We can design a problem like the following:

Oh no! My sorting algorithm stopped after 12 iterations! The diagram below shows the state of the array when it stopped. Which sorting algorithm was I using?

Since the iteration number, the underlying data, and, of course, the answer are determined randomly, the solution cannot be superficially shared among students: any attempt at solving the question requires deep engagement with the content. The question itself is rendered as simple multiple choice, and yet I can rest assured that its integrity is not compromised, because no pair of students has the same instance.
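A generator for this kind of question needs only a few lines of Python. The sketch below is illustrative only (its helper names are hypothetical, and its parameter ranges are not the ones used in my course): it picks random data, a random algorithm, and a random stopping point, and re-rolls any instance in which a second algorithm would leave the array in the same intermediate state, so that the correct multiple-choice option is unambiguous.

```python
import random

def partial_insertion_sort(arr, passes):
    """Return a copy of arr after `passes` passes of insertion sort."""
    a = arr[:]
    for i in range(1, min(passes + 1, len(a))):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def partial_selection_sort(arr, passes):
    """Return a copy of arr after `passes` passes of selection sort."""
    a = arr[:]
    for i in range(min(passes, len(a) - 1)):
        m = min(range(i, len(a)), key=lambda idx: a[idx])
        a[i], a[m] = a[m], a[i]
    return a

ALGORITHMS = {
    "insertion sort": partial_insertion_sort,
    "selection sort": partial_selection_sort,
}

def generate_variant(rng=random):
    """Choose data, an algorithm, and a stopping point for one question instance."""
    while True:
        data = rng.sample(range(10, 100), 8)      # distinct, easy-to-read values
        passes = rng.randint(2, len(data) - 2)    # stop the sort part-way through
        snapshots = {name: f(data, passes) for name, f in ALGORITHMS.items()}
        answer = rng.choice(list(ALGORITHMS))
        # Re-roll if another algorithm would leave the array in the same state,
        # so the correct multiple-choice option is unambiguous.
        if all(s != snapshots[answer] for name, s in snapshots.items() if name != answer):
            return {"start": data, "passes": passes,
                    "snapshot": snapshots[answer], "answer": answer}
```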

The Anatomy of a Question

The display characteristics of questions are written using custom HTML tags that cover many problem types. The example problem above employs integer input and also symbolic input (so that $3n+6$ is interpreted equivalently to $3(n+2)$). There are drawing features, matrix inputs, drop-down selections, and Parsons problems, in addition to the traditional multiple choice and checkbox. PL has support for code editing, and it provides access to a Docker container in which the problem author can deploy immediate autograding.

Random instantiation of the data for a problem is generated by a Python function and passed through to the question HTML using mustache templates. Anything you can generate in Python, including random SVG and canvas graphics, can be included in the HTML of a question.
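As a simplified sketch of that pipeline, and assuming PrairieLearn's usual server.py convention, in which a generate(data) function fills data["params"] (exposed to the question HTML as mustache tags) and data["correct_answers"] (used for grading), the tree question above could be instantiated roughly as follows; the parameter ranges are illustrative rather than the ones from my course.

```python
import random

def generate(data):
    """Randomly instantiate the full k-ary tree question.

    Values placed in data["params"] become available to the question HTML
    as mustache tags (e.g. {{params.leaves}}), and data["correct_answers"]
    holds the reference answers used for grading.
    """
    k = random.choice([3, 4, 5, 6])        # tree arity
    internal = random.randint(3, 9)        # number of internal nodes
    leaves = (k - 1) * internal + 1        # counting identity for full k-ary trees

    data["params"]["k"] = k
    data["params"]["leaves"] = leaves
    data["correct_answers"]["internal"] = internal
```

In the question HTML, an input element such as pl-integer-input (with answers-name="internal") would then collect and automatically grade the student's response.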

Community of Practice

The biggest one-time roadblock to PL adoption is content migration and question development. To address this challenge, a vital community of practice is a necessity, and I will continue serving as a host and advocate inside that community as long as PL is an effective tool for assessment of student learning at UBC. Such communities of practice have been shown to increase the adoption of active-learning classroom strategies among instructors (Tomkin et al., 2019), and we believe that the approach will similarly reinforce assessment innovation.

Following the community-building strategies of UIUC, we are supporting real-time help, brainstorming, and dialogue via chatrooms in Slack. In just the last (typical) day, TAs from various courses charged with converting content to PL have posed questions about scoring mechanisms (repeated attempts for diminishing points), assessment formation, and question building. Those questions are answered by the community: other TAs, CS technical staff member Andrew Stec, and me.

For my courses, we are pushing the limits of the technology, designing interactive diagrams that allow students to explore characteristics of structures and algorithms; we believe these will also be useful to instructors of chemistry and biology. TAs under my supervision have already submitted pull requests against the core PL code, and have also developed a workflow that prepares student PL responses for manual grading via Gradescope. We are both users of and contributors to the platform, and as such we are active in both communities of practice.

CPSC203: Programming, Problem Solving, and Algorithms

A common refrain among students upon completion of an introductory programming course like CPSC103 is “what next?” In fact, in a survey of CPSC103 students during 2017W2, 67% of BSc students indicated they would be at least “somewhat interested” in enrolling in a second programming course for non-majors, and 40% said they would be “very interested.” Our goal in designing the new course was to answer this demand, while simultaneously giving students an additional choice for fulfilling the new 6-of-7 Faculty of Science Breadth requirement. CPSC203, “Programming, Problem Solving, and Algorithms,” gives students the skills to solve increasingly complex problems with code (Python), while at the same time developing their ability to describe and analyze those problems using abstractions. The problems we have chosen are rich, diverse, and interconnected, with applications in the arts and sciences. To my knowledge, the syllabus for this course is unique; no similar course exists.

The course was my design, from early ideation through course proposal and materials authoring. Development was supported by a Skylight Development Grant and CS-CWSEI funds. The first offering, in 2019W1, enrolled just 27 students. While this small class allowed us to develop and pilot the course content, it is my dream that CPSC203 will one day be taken by thousands of UBC students every year. Current enrollment for 2020W2 is 60 students.

Course Highlights

Schedule

| Unit | Exploration | Time | Short Description | Objectives |
|---|---|---|---|---|
| PROGRAMMING FLUENCY | Handcraft | 1 wk | Solve and design pattern puzzles inspired by traditional handcrafts and world flags. | Classes, problem decomposition, iteration |
| | Billboard Hot 100 | 1 wk | Analyze songs and artists over time to answer increasingly complex queries. | Web scraping, Python dataframes, matplotlib |
| CLASSIC ALGORITHMS | The Overstory | 1.5 wk | Applications of Voronoi diagrams. | Problem complexity, algorithm efficiency |
| | Random Song Generator | 1.5 wk | Markov chains and their applications, including PageRank and protein folding. | Random number generation, music as data, dictionaries |
| ALGORITHM DESIGN | Sudoku | 2 wk | Represent and solve simple grid-based games. | Graphs, breadth-first graph search, intractability |
| | Road Trip Planning | 2 wk | Use Dijkstra's algorithm to find shortest routes between pairs of locations; contrast with the Travelling Salesperson problem. | OpenStreetMap, optimization problems on graphs |
| | Visualizing Literature | 2 wk | Create Harry Potter's social network using the text of the novel. | Natural language processing, data cleaning, part-of-speech tagging, named entity recognition, NetworkX |

(Pilot) Course website: https://cheeren.github.io/cpsc203test/

Exams: Midterm, Final

Course Proposal