Title: Sample-efficient learning from cooperative teachers
Dr. Sandra Zilles
Department of Computer Science
University of Regina
Most machine learning models assume either (a) that the data fed to a learning algorithm is sampled at random from some (unknown) distribution, or (b) that a learning algorithm has to be successful even for the worst possible data presentation (no matter how unlikely it is). However, in many application scenarios, e.g., when a human interacts with a learning machine, the data is in fact selected and presented to the learning machine by a "teacher" who chooses "helpful" examples.
We introduce two models of cooperative teaching and learning, showing how a learning algorithm can benefit from knowing that the data is chosen by a teacher. These new models can drastically reduce the sample size required for teaching. For instance, monomials can be taught with only two labeled data points, independent of the number of variables, whereas in standard teaching models the required number of labeled data points is exponential in the number of variables.
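As an illustration only (this sketch is not taken from the talk), the intuition behind a two-example bound for monomials can be seen in a simple shared protocol: the teacher sends two positive examples, one with all irrelevant variables set to 0 and one with all irrelevant variables set to 1, so the learner can read off the literals as exactly the positions where the two examples agree. The function names and encoding below are hypothetical choices for the sketch, not the paper's notation.

```python
def teach(monomial, n):
    """Teacher: `monomial` maps a variable index to its required Boolean
    value (a positive or negated literal). Returns two positive examples:
    one with every irrelevant variable set to False, one with every
    irrelevant variable set to True."""
    e1 = tuple(monomial.get(i, False) for i in range(n))
    e2 = tuple(monomial.get(i, True) for i in range(n))
    return e1, e2

def learn(e1, e2):
    """Learner: positions where the two positive examples agree are taken
    as literals of the monomial; positions where they differ must be
    irrelevant variables."""
    return {i: a for i, (a, b) in enumerate(zip(e1, e2)) if a == b}

# Example: the monomial x0 AND NOT x3 over n = 6 variables.
target = {0: True, 3: False}
e1, e2 = teach(target, 6)
print(learn(e1, e2) == target)  # the learner recovers the target exactly
```

Note that the sample size is 2 regardless of `n`: the teacher's choice of examples encodes which variables are irrelevant, which is precisely the kind of information a learner can exploit once it may assume the data comes from a cooperative teacher.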
This theory of cooperative learning opens new ways of (a) showing inherent connections between teaching and active learning and (b) tackling a long-standing open question on sample compression.
(joint work with Steffen Lange, Robert Holte, and Martin Zinkevich)