Machine Learning

NUS SoC, 2018/2019, Semester I
Thursdays 12:00-14:00, Lecture Theatre 19


Note: This is not the current iteration of the course. Please visit http://www.comp.nus.edu.sg/~cs3244 for the most updated iteration.

This module introduces basic concepts and algorithms in machine learning and neural networks. The main reason for studying computational learning is to make better use of powerful computers to learn knowledge (or regularities) from raw data. The ultimate objective is to build self-learning systems that relieve humans from some of their already-too-many programming tasks. At the end of the course, students are expected to be familiar with the theories and paradigms of computational learning, and capable of implementing basic learning systems.

N.B. We will be teaching and using the Python programming language (Python 3.x) throughout this class, along with Jupyter Notebooks via Google Colab.
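
If you would like a quick sanity check of that environment, a minimal sketch along the following lines (our illustration, not an official setup script) confirms that a Colab notebook is indeed running Python 3:

    # Minimal sanity check for the course environment (illustrative only).
    import sys

    # Google Colab notebooks run Python 3.x; this assertion simply confirms it.
    assert sys.version_info.major == 3, "This course assumes Python 3.x"
    print("Running Python {}.{}".format(sys.version_info.major, sys.version_info.minor))

    # Scientific packages such as numpy come pre-installed on Colab; importing one
    # is a reasonable first check that the notebook is usable for the assignments.
    import numpy as np
    print("numpy version:", np.__version__)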

Course Characteristics

Modular Credits: 4.

Prerequisites: (CS2010 or its equivalent) and (ST1232 or ST2131 or ST2132 or ST2334) and (MA1101R or MA1311 or MA1506) and (MA1102R or MA1505 or MA1521)

Instructors:

Teaching Assistants:

Office hours are held (before and after class), but more commonly by appointment. Emails to me are, by default, assumed to be public, and my replies along with your anonymized email will likely be posted to IVLE. Please let me know if you do not want the contents of your email posted; I will be happy to honor your request.

Workload

(2-1-0-3-3)

Translation:

  • 2 lecture hours per week (flipped)
  • 1 hour of tutorials (in class)
  • 3 hours for projects, assignments, fieldwork, etc. per week
  • 3 hours for preparatory work by a student per week

Class Structure

This class is a flipped class, a variant of a blended class. Typically, you’ll watch the first part of the lecture before coming to class, attend the physical in-class session (on Thursday afternoons) as a single-section “tutorial”, and then watch a subsequent recorded video lecture after class to further reinforce the lesson introduced in the introductory video and in the physical in-class session.

Tutorials: There will be no tutorials for this class. As the class is flipped, the in-class lecture session will be used to cover exercises and provide “tutorial-like” reinforcement of the concepts introduced in the videos. We will use the tutorial slots to conduct consultations about student projects and assignments.

Schedule

We note that machine learning is a subject with a great deal of very good expertise and tutorials out there. It is best to tap into these resources, as they have good production quality and are more condensed, possibly saving you time. However, we still think the in-class lecture is helpful for building a better connection with the material on certain topics.

This class will be flipped; i.e., you will be asked to watch videos on an unlisted YouTube channel explaining the concepts on your own first, and then come to class for a class-wide recitation, in which the teaching staff will guide you through pertinent exercises and reinforcement activities.

Week 1 (16 Aug): Administrivia and Paradigms of Learning
Week 2 (23 Aug): Naïve Bayes and k-Nearest Neighbors
Week 3 (30 Aug): Linear Classifiers
Week 4 (6 Sep): Logistic Regression
Week 5 (13 Sep): Bias and Variance and Overfitting. Deadline: Project Proposals
Week 6 (20 Sep): Regularization and Validation. Deadline: Peer Grading of Project Proposals
Recess Week (27 Sep)
Week 7 (4 Oct): Midterm
Week 8 (11 Oct): Neural Networks and Backpropagation. Deadline: Interim Reports
Week 9 (18 Oct): Deep Learning. Deadline: Peer Grading of the Interim Reports
Week 10 (25 Oct): Support Vector Machines
Week 11 (1 Nov): Decision Trees and Ensembles. Deadline: Project Posters and Videos
Week 12 (8 Nov): k-Means and Expectation Maximization
Week 13 (15 Nov): Machine Learning Ethics. Deadlines: Participation on the evening of 13th STePS (14 Nov, 18:00-22:00); Peer Grading of the Project Posters and Videos; Project Reports
Exam Week (26 Nov): Final Assessment (Afternoon)
Grading

The grading for this class will comprise the following continuous assessment milestones and a final exam. The final exam will be open book.

Midterm (4 Oct 2018, in-class): 20%
Machine Learning Project: 30%
Weekly Python Notebook Assessments: 5%
Participation: 5%
Final Exam (26 Nov 2018, afternoon): 40%
Total: 100%
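
As a rough illustration of how these weights combine, the sketch below computes a weighted total from hypothetical component scores (the scores are made up; this is not an official grade calculator):

    # Illustrative sketch of combining the assessment weights (hypothetical scores).
    weights = {
        "midterm": 0.20,
        "project": 0.30,
        "notebooks": 0.05,
        "participation": 0.05,
        "final_exam": 0.40,
    }

    # Example scores on a 0-100 scale, invented purely for illustration.
    scores = {
        "midterm": 72,
        "project": 85,
        "notebooks": 90,
        "participation": 100,
        "final_exam": 68,
    }

    overall = sum(weights[k] * scores[k] for k in weights)
    print("Overall mark: {:.1f} / 100".format(overall))  # 76.6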

Attendance is not mandatory, but it will help with your participation grade. Participation is very helpful for your teaching staff too: without it, we have very little idea whether you understand the material that we’ve presented, or whether it is too difficult or too trivial. Giving feedback in the form of questions and discussion gives us a better idea of which topics you enjoy and which you are not too keen on.

Academic Honesty Policy

Please note that we enforce these policies vigorously. While we hate wasting time with these problems, we have to be fair to everyone in the class, and as such, you are advised to pay attention to these rules and follow them strictly.

Collaboration is a very good thing. Students are encouraged to work together and to teach each other. On the other hand, cheating is considered a very serious offense. Please don’t do it! Concern about cheating creates an unpleasant environment for everyone. If you are caught, you will be automatically reported to the vice-dean of academic affairs; no exceptions will be made for any infraction, no matter how slight the offense.

So how do you draw the line between collaboration and cheating? Here’s a reasonable set of ground-rules. Failure to understand and follow these rules will constitute cheating, and will be dealt with as per University guidelines. We will be enforcing the policy vigorously and strictly.

You should already be familiar with the University’s honor code. If you haven’t yet, read it now.

The Pokémon Go Rule: This rule says that you are free to meet with fellow student(s) and discuss assignments with them. Writing on a board or shared piece of paper is acceptable during the meeting; however, you may not take any written (electronic or otherwise) record away from the meeting. This applies when the assignment is supposed to be an individual effort. After the meeting, engage in a half hour of mind-numbing activity (like catching up with your friends’ and family’s activities on Facebook) before starting to work on the assignment. This will ensure that you are able to reconstruct what you learned from the meeting, by yourself, using your own brain.

The Freedom of Information Rule: To ensure that all collaboration is on the level, you must always write the name(s) of your collaborators on your assignment. Failure to adequately acknowledge your contributors is at best a lapse of professional etiquette, and at worst it is plagiarism. Plagiarism is a form of cheating.

The No-Sponge Rule: In intra-team collaboration where the group, as a whole, produces a single “product”, each member of the team must actively contribute. Members of the group have the responsibility (1) to not tolerate anyone who is putting forth no effort (being a sponge) and (2) to not let anyone who is making a good faith effort “fall through a crack” (to help weaker team members come up to speed so they can contribute). We want to know about dysfunctional group situations as early as possible. To encourage everyone to participate fully, we make sure that every student is given an opportunity to explain and justify their group’s approach.

This section on academic honesty is adapted from Surendar Chandra’s course at the University of Georgia, who in turn acknowledges Prof. Carla Ellis and Prof. Amin Vahdat at Duke University for his policy formulation. The origin of the original rule, called the Gilligan’s Island rule, is uncertain, but it can be traced back at least to Prof. Dymond’s use of it at York University in 1984.

Late Submissions

All homework assignments are due on IVLE by 11:59:59 pm (Singapore time) on the due date. No exceptions will be made without a medical certificate. The following penalties will apply to late submissions:

  • late within 1 hour: 10% reduction in grade;
  • late within 5 hours: 30% reduction in grade;
  • late within 1 day: 50% reduction in grade;
  • late within 5 days: 70% reduction in grade;
  • after 5 days: zero mark.

These penalties are intentionally severe to encourage students to turn in assignments on time. This, in turn, means that your teaching staff can start and finish grading within a certain time period, which helps you get timely feedback on your work. Do not expect any preferential treatment if you turn in an assignment late.
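
To make the penalty schedule above concrete, here is a small illustrative sketch of how the tiers translate into a grade multiplier (our own illustration, not an official grading script):

    # Illustrative sketch of the late-penalty tiers above (not an official grading script).
    def late_penalty_multiplier(hours_late: float) -> float:
        """Return the fraction of the original grade kept, given hours past the deadline."""
        if hours_late <= 0:
            return 1.0       # on time: full credit
        elif hours_late <= 1:
            return 0.9       # late within 1 hour: 10% reduction
        elif hours_late <= 5:
            return 0.7       # late within 5 hours: 30% reduction
        elif hours_late <= 24:
            return 0.5       # late within 1 day: 50% reduction
        elif hours_late <= 24 * 5:
            return 0.3       # late within 5 days: 70% reduction
        else:
            return 0.0       # after 5 days: zero mark

    # Example: an assignment worth 80 marks submitted 3 hours late keeps 70% of its grade.
    print(80 * late_penalty_multiplier(3))  # 56.0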

Assignment return policy and regrades

“Failure is success if we learn from it.” (Malcolm Forbes)

All students have the right to question the grading of their work. If a regrade is sought for a particular milestone, this must be brought to our attention by email within 3 days of the return of the preliminary grades. Requests later than that will not be entertained without certified medical leave or school permission.

Student Projects

Credits: Much of the architecture for this course project comes from Bryan Low (NUS) and Thorsten Joachims (Cornell).

A key part of the mastery of machine learning is practicing it, outside of the formal mathematical and statistical basis for the algorithms. The student projects form an integral part of the assessment. Student teams should have 5-6 members and will be assembled by the teaching staff. There are two kinds of projects that can be done: Self-Defined Projects and Kaggle Competition Projects. Choose only one of the two.

Self-Defined Projects

The final project is intended to be a limited investigation in an area of machine learning of your choice. The purpose of the project is to enable you to study an area of your interest in greater detail in a practical way. The project can take on many forms, including but not limited to:

  1. Projects that explore the application of machine learning ideas to an interesting “real-world” problem.
  2. Projects that involve a theoretical or empirical study of aspects of a learning method or model.
  3. Projects that do an experimental, comparative study of various machine learning methods.
  4. Projects that extend or synergise with an existing project (which could be from a member of your group), such as an honors year project.

Doing such a project gives you more flexibility and allows you to work on something of your liking. At the same time, however, this may require some additional effort (depending on your problem), such as data collection, coming up with suitable baselines, and/or explicitly declaring what is being extended or what is novel within the scope proposed for the class. The teaching staff will take these factors into account when grading.

Kaggle Competition Projects

On the Kaggle website, you can find and choose from a number of interesting machine learning competitions. Upon joining a competition, you will be provided with training and testing sets, and your performance will be measured with specified metrics and ranked against other competitors on the web.

Note that performance on the different metrics is not the critical factor in your grade on the project. While doing well on the competition helps, we primarily evaluate with respect to the (interesting) ideas your team employs to solve the task. While the data is easier to obtain for such a project, there is less flexibility and more emphasis on coming up with interesting methods.

Project Structure

Proposal: 2%
Interim Report: 3%
Final Poster Presentation: 10%
Final Project Write-up: 15%
Peer Grading (of two other projects): counts towards Participation
Total: 30%

Please propose a topic to us in your project proposal, and we will give you feedback on the feasibility. After the project proposal, you will be assigned a contact TA that you can use as a resource for questions and advice. We recommend meeting with your contact on a regular basis, so that you identify potential problems before it is too late.

Your team will submit an interim report three weeks into the project, indicating the progress made. You may be selected to present the results of your project in a poster session held in conjunction with the 13th STePS. Your team will also have to prepare a final project report in the format of an academic paper (double-columned, single-spaced, with an abstract and references).

Peer Reviewing: We will be performing peer reviewing during the project phase of the course. There are several benefits to peer reviewing. Most importantly, it helps you understand and appreciate work from other students and groups, and it provides more feedback to everybody about the projects. Peer reviewing means that each one of you will be given a few submissions of your classmates to read and grade. This essentially involves providing some brief comments to help each other out. Please be as fair and impartial as possible during this reviewing. TAs will also evaluate the peer reviews and provide feedback as well. You will be graded on how well you review other projects and how insightful your comments are. This will be an integral part of the participation grade in the class.

We will be following a double-blind peer review model. This means that the reviewer does not know whose project he or she is reviewing, nor do the authors of the project know who is reviewing them. Moreover, a reviewer is not allowed to disclose whom he or she is reviewing. To be clear, the course staff will know the identity of everybody.

While detailed grading rubrics for the projects will be released in due course, projects will be graded along the following dimensions:

  1. Originality
  2. Relevance to course
  3. Quality of arguments (are claims supported, how convincing are the arguments you bring forward)
  4. Clarity (how clearly are goals and achievements presented)
  5. Scope/Size (in proportion to size of group)
  6. Significance (are the questions you are asking interesting)

Project Showcase

Below is a listing of the projects from this past semester. We have marked some projects as ‘notable’: these projects either exhibit a sufficient degree of rigor, or the team has shown considerable maturity over the duration of the course.

  • Notable Generating word embeddings for Singlish to be used in sentiment analysis

By Lee Ming Liang, Sheryl Toh, Perry Wang Zhi Ming, Chew Chia Sin, Xiao Yunhan, Le Trung Hieu

“Hi, how are you?” This sentence may be simple for us to understand, yet it is incomprehensible to a computer. Our project aims to explore how we can model the daily language of Singapore (Singlish) using a mathematical representation. One common way to represent words is the use of vectors that capture the semantic meaning of the words; such a vector representation is known as a word embedding.

Since this is an unsupervised learning task, there is no practical intrinsic evaluation of how well the generated embeddings capture the semantic and syntactic meaning of the words. Therefore, we perform an extrinsic evaluation of the generated embeddings: we apply them to sentiment analysis on sentences to see how well the word embeddings capture the semantic meaning of Singlish words. We found that the Continuous Bag Of Words (CBOW) Word2Vec model works the best, achieving the highest F1-score on the sentiment classification task.

  • Real-world Image Recognition for Multiple Human Attributes

By Niu Yunpeng, Wang Debang, Yu Pei, Daniel Koh, Goh Wen Zhong, Tan Ying Lin

Human attributes recognition is the infrastructure of human re-identification systems. Although there has been an extensive amount of research on the classification of a single attribute such as gender or age, a feasible approach to multiple human attributes recognition is still an open topic. This subject becomes more exciting in real-world, unconstrained scenarios. We study some state-of-the-art works in this area and design several Convolutional Neural Network (CNN) models to recognize two attributes: gender and long/short sleeves. We also present a generic framework for multiple attribute recognition on a single object.

  • Comparative Study On Diagnosing Pneumonia Using X-ray Images

By Goh Yu Jing, Karen Tee Lin, Lee Yong Jie Richard, Sun Yi Jing, Teo Jie Qi, Teo Shu Qi

Pneumonia is a type of acute respiratory infection that affects the lungs. It is the main culprit behind deaths of children from infectious disease worldwide. Pneumonia has many forms, and they are all diagnosed through X-rays (“Pneumonia,” n.d.). Abnormal white patches in chest X-rays, in places where they should not be, are used to differentiate between normal and pneumonic X-rays. This project aims to compare different preprocessing techniques and machine learning algorithms that can diagnose pneumonia accurately. The different algorithms that we have used are the Support Vector Machine (SVM), Random Forest, Logistic Regression, and a deep learning model, Convolutional Neural Networks. Various preprocessing techniques like Denoising, Histogram Equaliser and Change of Image Size were used to determine their effect on the models. In addition, various overfitting measures for deep learning were investigated. The dataset was obtained from Kaggle. These X-rays of pediatric patients aged one to five originated from the Guangzhou Women and Children’s Medical Center, Guangzhou. The performance metrics used are the weighted F1 score and the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. Overall, the Convolutional Neural Network worked the best, with an F1 score of 0.82 and an AUC of 0.93. The Denoising and Histogram Equaliser transformations seem to work equally well on the data.

  • Toxic Comments Classification Challenge

By Lim Wei Liang, Muhammad Afif B Mohd Ali, Lee Yi Quan, Isabel Ang, Lee Jun Han, Tan Chee Wee

This project aims to come up with a model to distinguish toxic comments on Wikipedia from clean ones. On top of identifying toxic comments, we also wish to develop a model that distinguishes and classifies toxic comments into different categories. By investigating the effects of both traditional supervised models (Logistic Regression, Naive Bayes) as well as modern approaches (unsupervised learning, word embeddings, recurrent neural networks) on toxic comment classification, we managed to find some success in correctly classifying toxic comments. We also looked at how spam in the training data affected our models’ performance.

  • Notable Predicting Stock volatility with News Headlines, a Comparative Analysis

By Neo Ann Qi, David Bani-Harouni, Chantal Pellegrini, Vincent Jonsson Qi, Ashley Ong Zhi Wei, Foo Yi Hao Zachary

This project aimed to train different models on different representations of news headlines and compare their performance in predicting stock volatility. The three kinds of input representation are raw text, events, and sentiment. We evaluate the strengths and weaknesses of the different features. The best accuracies were achieved when using raw text, followed by events; using sentiment analysis resulted in poor accuracy. It is interesting to note that for all approaches, there exist some headlines where only one out of the three models predicted correctly. We were able to improve our overall accuracy by averaging the results of the three single predictions to get our final prediction. We further experimented with either using every headline of one day as a single sample or combining them into one.

  • Notable The Unbinding of Isaac: Clearing Dungeons with Deep Q-Network

By Wu Jiacheng, Zhou Yizhuo, Han Shiyang, Waise Koh, Nurul Adilah, Daniel Biro

Since DeepMind proposed playing Atari games with deep reinforcement learning in 2013, many researchers have attempted to reproduce their results. In this project, we reproduce a Deep Q-Network (DQN) on “The Binding of Isaac”, a pixel-style dungeon crawler akin to an Atari game but with a higher-dimensional action space and a higher frame resolution. We demonstrate the ability of the DQN to learn to play this game in one boss room and one monster room, and we sought to interpret the strategy learned by the agent. Finally, we implemented transfer learning to further improve the agent’s performance.

  • Restore the Archive - Using Neural Networks to Remove Distortions in Scanned Documents

By Junkai Yeo, Neil Brian Narido Labayna, Tan Chuan En Gabriel, Anders Helgeland Vandvik, Ding Feng Wong, Yang Sheng Lim

We are interested in restoring original document quality from scanned images containing distortions such as coffee stains, crumpling and pen marks. There are different approaches to image restoration and we will be exploring the encoder-decoder neural network, U-net and Generative Adversarial Networks (GANs) to find the best approach to the problem. As distorted images are not abundant, we developed a script to automatically generate a large dataset. After exploring the various neural networks, U-net and GANs were not able to converge to a desired network model and the encoder-decoder neural network was the only method that produced restored documents that were readable.

  • Identifying Salt Deposits Beneath the Earth’s Surface

By Chua Qiao Lin, Lai Yingchen, Lau Fei Heng Isaac, Lim Heng Yu, Poh Jie, Qian Jinna

We are trying to identify salt deposits in seismic images, because drilling into areas of salt is dangerous for oil and gas companies. The problem is an object detection problem with an output of 1 (salt) or 0 (rock) for each pixel. We found that it is not the color of a pixel, but rather its color contrast with neighboring pixels, that determines whether the said pixel is salt. Therefore, the key is to identify salt-rock boundaries within each image, thereby making the problem similar to an edge detection problem. Out of the several models implemented to solve the problem, a U-Net model with symmetric compression and expanding paths is the most effective at producing precise locations of salt content.

  • Revolutionising the Fight Against Pneumonia with Machine Learning

By Apoorva Ullas, Lee Yan Hwa, P Srivatsa, Alex Foo Da Weng, Juliette Soler, Sarah Ng Zhao Xian

Chest x-rays are the most common tool used to diagnose pneumonia. However, understanding chest X-rays requires domain expertise and professional radiologists. We applied machine learning so that a computer can be used to detect signs of pneumonia given a chest x-ray, increasing the ease of access to resources for pneumonia detection. We successfully compared three machine learning models for this task: YOLOv3, RetinaNet and Mask RCNN. We tested and justified their effectiveness in recognising pneumonia lung opacities on chest x-rays by using mAP (mean average precision) and comparing their neural network architecture and hyperparameters. RetinaNet proved to be the most effective machine learning model, due to its Focal Loss loss function and network architecture. Ensembling different RetinaNet models proved to further improve the effectiveness. However, we also note that effectiveness is traded off with speed (inference time).

  • Rossmann Sales Prediction

By Cao Wei, Chen Penghao, Lang Yanbin, Wang Zexin, Xie Yangyang, Zhang Hanming

Predicting the sales performance of retail companies is useful for making good investment decisions about a company, as well as for enhancing marketing strategies and inventory management. Traditional methods such as time series analysis and linear regression, although simple, bear strong model assumptions, low flexibility, and a limited scope into the past. We would like to study how Recurrent Neural Network (RNN) models can capture time series effects in the data. We have trained models such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and an ensembled meta-model, with careful feature engineering based on our observations of the data. We have seen promising prediction outcomes from the trained RNN models, and we also identified the limitations of the ensembled meta-model in our context.

  • Comparative Study of Supervised Learning Methods for Credit Card Fraud Detection

By Au Liang Jun, Dexter Wah Yixiang, Joycelyn Ng, Kenny Ng Jian Liang, Liew Jia Hong, Yan Hong Yao Alvin

Despite the implementation of fraud analytics and Europay, MasterCard and Visa (EMV) technology, credit card fraud rates have risen over the years. In this study, we compare the effectiveness of supervised learning methods in detecting fraudulent credit card transactions with the use of an appropriate performance metric. We envision that findings from this comparative study could help credit card companies improve their fraud detection technology, and could be extended to detect other forms of fraud.

  • RSNA Pneumonia Detection Challenge

By Anna Casserly Saltveit, Pengcheng Li, Sing Ye Chiew, Vivek Lakshmanan, Yi Zhe Ang, Yu Hui Koh

Pneumonia is a major cause of death for children worldwide, and detection can be challenging as the interpretation of signs may be clouded by the presence of other physical ailments. Hence, we propose to build a machine learning model that can detect the presence of pneumonia from chest radiographs of patients. For this computer vision task, we have successfully implemented a CNN model and fine-tuned its accuracy. This includes not only the exploration of state-of-the-art neural network techniques, but also the incorporation of medical domain knowledge. Notably, we discovered that transfer learning from a general image classification model worked surprisingly well for our specialized medical task. We also tapped into techniques like image augmentation, dropout, and feature engineering that enhanced our model accuracy. Our model predicted bounding boxes with accuracy up to 80%, over the initial threshold of 40%. We also managed to reach top 50 in stage one of the Kaggle Pneumonia competition.

  • Predicting foot traffic using weather and time data

By Celine Lim Shi-Yen, Evelyn Tan, Foo Guo Wei, Gover James Isaac, V R Soorya, Ewoud Arthur van Mourik

Foot traffic has traditionally been an important consideration for those involved in advertising, business, and city planning. Using sensor and weather data provided by the city of Melbourne, we studied the feasibility of predicting footfall in various locations of Melbourne. We applied linear regression and neural network models to the problem, and investigated the relative importance of each feature, as well as the use of an embedding layer. Furthermore, the use of an ensemble of local models was studied. Our results showed that time was the most important feature, and we were able to achieve reasonable accuracy with our final regression model, thus demonstrating the feasibility of this problem.

  • Creating a Pokémon Master using Reinforcement Learning

By Derrick Chua Han Sheng, Jamos Tay, Lee Guo Sheng, Loh Jia Shun, Ng Zhen Wei, Steffen Van

The continuous progress in deep reinforcement learning (DRL) has proven to be an interesting area for research. Specifically, the success in creating autonomous agents that play various games, ranging from Atari games to Go, at a super-human level has inspired us to explore the extent to which this can be applied to more complex games such as Pokémon battling. In this paper, we describe a simple but successful RL agent that learns the basic mechanics of Pokémon battling through a basic meta-learning approach.

  • Notable Reading Comprehension on Lecture Notes

By Nguyen Van Hoang, Lee Pei Xuan, Kevin Leonardo, Calvin Tantio, Luong Quoc Trung, Tan Joon Kai

This project explores the application of open-domain Question Answering (QA) to learning materials. We present a novel contribution, the Lecture Notes Question Answering (LNQA) dataset, comprising crowd-sourced annotations of question-answer pairs based on lecture materials from various fields of study. We seek to improve the overall pipeline of performing reading comprehension over lecture notes. This involves building a document retriever to find the relevant slide or page, and a document reader to identify the correct information within a given document. Our experiments suggest that initializing our LNQA-based model for document reading with a pre-trained Stanford Question Answering (SQuAD) v1.1 model significantly improves model performance, compared to baselines of using the pre-trained SQuAD model and a model trained with LNQA alone. We also achieve small baseline improvements in document retrieval by implementing a department filter to narrow the search space.

  • Classifying Pneumonia on CXR Images

By Arsalan Cheema, Benjamin Jeffrey, He Yingxu, Ilya Dubovitsky, Tan Qin Hui, Wang Ce

Pneumonia is a serious medical condition that is often not identified immediately due to the overwhelming number of chest radiographs (CXR) that doctors have to interpret. This project thus aims to incorporate machine learning into the interpretation process in order to prioritize and expedite doctors’ diagnoses. A Convolutional Neural Network (CNN) was trained to classify pneumonia from CXR images. The loss function was modified to penalize false negatives more heavily, to prevent our model from letting pneumonia cases go undetected. Experiments were then conducted with different combinations of layers, and validation accuracy was used to compare the models. The best model yielded 85.6% and 70.6% accuracy in predicting pneumonia and the absence of pneumonia, respectively.

  • Detecting and Classifying Lung Diseases using X-ray images

By Lu Lechuan, Ong Kuan Yang, Rhynade See Ey, Suyash Shekhar, Tan Jin Wei, Teo Ming Yi

In this project, we use Chest X-ray images to detect the presence of lung diseases and classify the images with diseases into 14 different types of lung diseases. Our goal is to build upon and improve the current best performing model - CheXNet. We did this by exploring different pre-processing techniques and experimenting with methods such as two-phase training. While we did not manage to surpass the performance of CheXNet in the end, we gained significant insight into the ways to tackle this problem and dataset.

  • Positive, Negative, or Neutral: The Deep Learning Approach to Sentiment Analysis

By Dominic Kenn Lim, Eugene Lim, Hung Pham Vu, Kelvin Ting, Ng Hung Siang, Song Wei Yang

Polarity-based sentiment analysis is a natural language task that predicts whether a given sentence has a positive, negative, or neutral tone. Using the IMDb Movie Reviews dataset, we compare the performance of different models, with the use of pre-trained embeddings and transfer learning. In particular, we compared the following models:

  1. Gated Recurrent Unit (GRU)
  2. Long Short-Term Memory (LSTM)
  3. Bidirectional GRU (BiGRU)
  4. Bidirectional LSTM (BiLSTM)

We found that the BiGRU had the best performance out of the 4 models. The BiGRU was used as a base to experiment with pre-trained GloVe and transfer learning with GloVe. However, GloVe did not improve the performance of BiGRU.

  • Cancer Cell Counting From Microscopic Images

By Alagappan Lakshmi, Cheng Yi Xin, Enzio Kam Hai Hong, Ho Wei Chin, Miyajima Jhoann Margarette Tristeza, Toh Chooi Ern

Cell counting is a ubiquitous yet tedious task that would greatly benefit from automation. Undoubtedly, counting the number of cells manually in a microscopic image is troublesome and prone to ambiguity. In our project, we have built a convolutional neural network (CNN) model to count cells in grayscale 8-bit microscopic images of cancer cells. The model is capable of predicting the presence of cells with decent accuracy.

  • Component Regularization for Domain-Specific Image Classification

By Goh Wei Ti, Nam Jun Jie Derek, Tay Jing Huang Elwin, Tham Shi Yuan, Yuan Yu Chuan, Zheng Shuwei Raymond

In the recent past, advancements in deep learning have brought about great improvements in image recognition. In particular, convolutional neural networks are widely used in building increasingly strong and robust image recognition applications. Food image classification is a challenging task, as food images present high variability and intrinsic deformability. To properly study the representation of different food images, we work on a subset of Recipe1M, a dataset that consists of 800 thousand food images and 1 million recipes. In this project, we build a food image classifier by including ingredient classification in the training of the neural network. We first create a basic food image classifier without ingredient information to use as our baseline for comparison. The classifier is built with a residual neural network following the ResNet50 architecture. We then further improve on this architecture using a number of different approaches. Embedding techniques, multi-labelling, and image augmentation are tested and incorporated into the model accordingly to improve the speed and accuracy of the model.

  • NBA Shots Prediction

By Gong Changda, How Si Wei, Jia Zhixin, Xu Yiqing, Yang Shuai, Zhang Hanyuan

This project aims to predict whether a shooter can make a certain shot, given a set of relevant features of a basketball shot (e.g. shot clock, game clock, number of dribbles made by the shooting player before the shot, etc.). We used k-Nearest Neighbours, the Perceptron learning algorithm, Logistic Regression, Neural Networks, Random Forests and Boosting to make the prediction, and evaluated the pros and cons of each method. We also cleaned the data and extracted several extra features to further improve the prediction accuracy.

  • Comparative Study of Various Machine Learning Techniques on Musculoskeletal Abnormality Detection

By Chen Sidai, Chua Zhong Lin Kane, Li Tangqing, Ren Changhao, Somesh Dev S/O Mohan

This project conducted an in-depth comparison of various machine learning techniques in the context of diagnosing diseased images from X-ray images. Three paradigms of machine learning methods were experimented with. The first is a semi-supervised approach, in which a convolutional autoencoder-decoder is trained to reconstruct only the normal images and detects abnormal images by comparing reconstruction errors. The effectiveness of classic machine learning models is also studied by feeding them lower-dimensional features extracted by the autoencoder. The last paradigm involves deep learning models, including VGGNet, ResNet and DenseNet. These models are shown to be able to produce promising results, achieving a highest accuracy of 0.8365.

  • Comparative Study of Machine Learning Models for Image Classification of Fashion Items

By Joel Ang Lang Yi, Lok You Tan, Cheong Min Wei, Hu Wanqing, May Chan Shu Zhen, Kan Yip Keng

There is a wide variety of supervised machine learning algorithms, each with its own inspirations and roots, advantages and disadvantages. We seek to explore these algorithms in detail to gain a deeper understanding of them and of how they perform compared to each other for image classification. To do that, we compared the models’ ability to classify images of fashion items and identified which of them are best suited to situations that emphasize different aspects of performance. Finally, using our acquired insights, we developed an original model, JoNet-0, and achieved better accuracy than the models we had previously implemented.

  • Prediction of Student Earnings and Loan Repayment Rates

By Amrut Prabhu, Andres Rojas, Ang Jing Zhe, Eeshan Jaiswal, Sreyans Sipani

This project looks at aggregate student characteristics in US colleges to build a machine learning model that estimates the predicted earnings and loan repayment rates of a given student. In order to get the most accurate results, we experimented with different models and analysed the factors leading to these results. This project is significant because nearly 40 percent of people who have taken student loans are expected to default on them by 2023, which will increase the annual number of defaulters from the current 1 million [7]. By allowing students to use our webpage to see their predicted earnings and loan repayment rates, we aim to help students with their financial planning so that they can make an informed decision about their choice of college. One of the highlights of the project is that our best model, Gradient Boosting with a Regressor Chain, is able to get up to 90.76% accuracy on the entire dataset. From this model, we found that the characteristics that have the most impact on the future earnings and repayment rates of students are their household income, loan amount, and the tuition fees of the college.