Project

For the projects, I'd like to see that you can (i) read machine learning papers, (ii) conduct research according to the scientific method, (iii) produce a well-written, clear and well-structured 6-page scientific paper that makes a compelling argument, and (iv) produce either a software implementation of an interesting idea or a theoretical result. For the paper, you must use the NIPS conference style file. Projects are individual.



You are welcome to come up with any idea, provided it's a machine learning project. That is, I want you to try a machine learning algorithm on an interesting dataset. Think of some cool data!
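
If it helps to see the overall shape of such an experiment, here is a minimal sketch in Python (purely illustrative, assuming scikit-learn and its bundled digits dataset; your project should use your own data and a method of your choosing): load the data, split it, fit a model, and report held-out performance.

    # Illustrative baseline only (assumes scikit-learn and its bundled digits
    # dataset); your project should use your own data and a method of your choice.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)            # load a small image dataset
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)      # hold out a test set

    clf = LogisticRegression(max_iter=1000)        # any classifier would do here
    clf.fit(X_train, y_train)                      # train on the training split
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))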

For inspiration, I recommend you look at the papers published at ICML, NIPS and the deep learning website. To give you a better idea, you can look at the list of past projects of Andrew Ng's course (similar to the ones at UBC). I also advise you to visit the pages of Geoff Hinton, Yoshua Bengio, Yann LeCun, Jason Weston, Fei-Fei Li, Carlos Guestrin, Csaba Szepesvari, Michael Jordan, Daphne Koller, Francis Bach, Josh Tenenbaum, Tom Griffiths, Charles Kemp, Ruslan Salakhutdinov,..., who are all great machine learning scientists. Their papers will not only provide you with project ideas, but also show you how to write a paper properly and how to make sure your experiments back up all claims made in the paper.

I strongly advise you to read Andrew's advice on applying ML algorithms.



Evaluation

The paper must be in NIPS format, as mentioned above, and it must be double-blind. That is, you are not allowed to write your name on it, etc. For more information, please read: NIPS reviewing and double-blind policy.



To submit the paper, email me the PDF file (any other format will be automatically penalized with -20%) by December 6th, 7pm. You will be penalized 10% for each hour it is late. The subject of the email should be ML PROJECT. The body of the email should contain ONLY your full name and your student number.



Soon after, you will receive 3 papers to review. That is, you will review anonymously (double-blind) 3 papers submitted by your colleagues. The reviews are due back on December 15th. Each review should be about 1 page and must assign each paper a mark between 1 and 10, as suggested by the NIPS instructions. To submit your reviews, email me a single PDF file of 3 pages only. The title of each page should be the title of the paper under review, with your score in brackets, e.g. "Least Squares for Energy Prediction (7)". The rest of each page (one page per review) should be a text evaluation of the work you are reviewing, based on the following NIPS criteria:

  • Quality: Is the paper technically sound? Are claims well-supported by theoretical analysis or experimental results? Is this a complete piece of work, or merely a position paper? Are the authors careful (and honest) about evaluating both the strengths and weaknesses of the work?
  • Clarity: Is the paper clearly written? Is it well-organized? (If not, feel free to make suggestions to improve the manuscript.) Does it adequately inform the reader? (A superbly written paper provides enough information for the expert reader to reproduce its results.)
  • Originality: Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced? We recommend that you check the proceedings of recent NIPS conferences to make sure that each paper is significantly different from papers in previous proceedings. Abstracts and links to many of the previous NIPS papers are available from http://books.nips.cc
  • Significance: Are the results important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the paper address a difficult problem in a better way than previous research? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions on existing data, or a unique theoretical or pragmatic approach?



Your project mark will mostly be based on the scores you get from your colleagues. I will simply play the role of chair and calibrate the scores to make sure there is no bias. I will also control 20% of the mark, which will be based on the quality of your reviews.
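
To give a rough idea of what calibration means here (an illustrative sketch only, not necessarily the exact procedure I will use, and with hypothetical reviewer names and marks): one simple approach is to standardize each reviewer's marks, so that harsh and generous reviewers contribute on a comparable scale, and then average the standardized scores each paper received.

    # Illustrative only: one simple way to calibrate reviewer scores.
    # The reviewer names and marks below are hypothetical.
    from collections import defaultdict
    from statistics import mean, pstdev

    # raw_scores[reviewer] = {paper_title: mark on the 1-10 scale}
    raw_scores = {
        "reviewer_A": {"paper_1": 7, "paper_2": 5, "paper_3": 9},
        "reviewer_B": {"paper_1": 4, "paper_2": 3, "paper_3": 6},
    }

    calibrated = defaultdict(list)
    for reviewer, marks in raw_scores.items():
        mu = mean(marks.values())
        sigma = pstdev(marks.values()) or 1.0       # avoid dividing by zero
        for paper, s in marks.items():
            calibrated[paper].append((s - mu) / sigma)  # z-score per reviewer

    for paper, zs in sorted(calibrated.items()):
        print(paper, round(mean(zs), 2))            # calibrated average per paper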



Projects



  • [1] Robust deconvolution of natural images using multiple captures.
  • [2] Effectiveness of Sparse Features: An Application of Sparse PCA.
  • [3] Bach to Basics: A Simple Evolutionary Approach To Music Composition.
  • [4] A Novel Transcription Factor Binding Sites Prediction Approach.
  • [5] Application of sparse coding with spatial pyramid matching for face expression classification.
  • [6] Predicting Length of Future Hospitalization Period: A Comparative Study.
  • [7] Breaking a Visual CAPTCHA.
  • [8] Adaptive Parallel Tempering MCMC.
  • [9] Democratic Echo State for Music Improvisation.
  • [10] Comparing Multifunctionality and Association Information when Classifying Oncogenes and Tumor Suppressor Genes.
  • [11] Using GPLVM for Inverse Kinematics on Non-cyclic Data.
  • [12] PAQ on MIDI.
  • [13] Transforming Auto-encoders.
  • [14] Subject-Oriented Image Classification based on Face Detection and Recognition.
  • [15] Feature Selection for the Binary Classification of Severe-Sickness Experience in HIV-Exposed but Uninfected Infants Involving Multivariate Longitudinal Dataset.
  • [16] Predicting a Meteotsunami with Recurrent Neural Network.
  • [17] Simulation-free approach for wait time prediction in a tele-queue.
  • [18] Classifying ultraconserved elements with hidden Markov model of evolutionary profiles.
  • [19] Unsupervised K-means Feature Learning for Gesture Recognition with Conductive Fur.
  • [20] Body Part Tracking with Random Forests and Particle Filters.
  • [21] EmbedViz - Graph Visualization of Learned Structured Embeddings of Knowledge Bases.
  • [22] Authorship Attribution for Ancient Greek Text Fragments Using Support Vector Machines.
  • [23] Comparing Shape Features with Statistical Features for Classification Tasks on Physiological Data.
  • [24] Closed-Form Supervised Dimensionality Reduction with Logistic Regression.
  • [25] Learning the Best K-th Channel for QoS Provisioning in Cognitive Networks.
  • [26] Experiments with Learning for NPCs in 2D shooter.
  • [27] SmartTitle: An Intelligent Subtitling System for Second Language Learning.
  • [28] Product Clustering for Online Crawlers.
  • [29] A Multi Armed Bandit Formulation of Cognitive Spectrum Access.
  • [30] Locating Error Sources in Digital Circuit Networks.
  • [31] Low-Rank Approximation for Link-Based Ranking.

Useful Links: