CS6234 : Advanced Algorithms

 

Basic Information:

 

Modular credits: 4

Prerequisite: CS5234

 

This module is aimed at graduate students who are doing or intend to do advanced research in algorithm design and analysis in all areas of computer science. The module covers advanced material on algorithms with emphasis on efficient algorithms, and explores their use in a variety of application areas. Topics covered include, but are not restricted to, linear programming, graph matching and network flows, approximation algorithms, randomised algorithms, online algorithms, local search algorithms, algorithms for large datasets. The module will be a seminar-based module that will expose students to current research in these areas.

Logistics:

 

Lectures: Every Friday 9-11 am at SR@LT19. First lecture on 18th January, 2013.

 

There will be no tutorials.

 

Office Hours: Every Friday 2-3 pm (or by appointment).

Lecturer:

 

Name : Rahul Jain

Office : S15-04-01

Email: first name at comp.nus.edu.sg

Phone: 65168826 (off).

Books:

 

We may use parts of the following texts and also material from outside these, e.g. research papers on algorithms and related areas.

 

1.      “Introduction to Algorithms” by Cormen, Leiserson, Rivest and Stein. (MIT Press, Second Edition.)

2.      “Algorithm Design” by Jon Kleinberg and Eva Tardos. (Pearson International Edition.)

3.      “Approximation Algorithms” by Vijay V. Vazirani. (Springer.)

4.      “Randomized Algorithms” by Rajeev Motwani and Prabhakar Raghavan. (Cambridge University Press.)

5.      “Computational Geometry”, by M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars. (Third edition. New York, NY: Springer-Verlag, 2008. ISBN: 9783540779735).

6.      “Data streams: Algorithms and applications”, by S. Muthukrishnan. Appears in Foundations and Trends in Theoretical Computer Science, Volume 1, issue 2, 2005.

Evaluations:

 

Somebody famously said, “A problem well understood is half solved.” Therefore, a primary ability any researcher must develop is the ability to read, understand and absorb the known body of knowledge before adding to it. Since this module is intended for students who wish to do research in algorithms and related areas, the primary focus of the module will not be on ‘teaching new topics in algorithms’ (this has been done in several previous courses that you have taken in your journey so far), but on ‘teaching how to learn new topics in algorithms’. The focus will therefore be on training students to read and understand advanced material. You will be expected to read, understand and present advanced topics in algorithms and research papers in algorithms. Your grade will be determined by how well you understand and present these topics/papers. There will be no exams. The marks distribution is as follows.

 

First presentation 40 marks.

Second presentation 40 marks.

Forum discussions 20 marks.

 

Schedule:  Please click this link.

 

Presentations:

 

1.     Topic: Machine learning.

Date: 25 January.

Group: Tian Shangxuan, He Xiangnan, Ji Kun, Guan Peiyong and Wang Hancheng.

 

 

In the presentation, the group mainly talked about off-line and on-line learning algorithms in machine learning. Off-line learning algorithms aim to find a target function based on a training set. The group focused on Probably Approximately Correct (PAC) learning, which can generate concepts having low error with high probability and provides a theoretical foundation for the relation between the accuracy guarantee and the size of the training set. Weak PAC learnability, a boosting approach to reduce the error probability, and the AdaBoost algorithm were also covered.
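
As an illustration of the boosting idea, here is a minimal AdaBoost sketch in which the weak learners are supplied as a pool of candidate classifiers returning +1/-1. The pool, the toy data and all names are illustrative, not taken from the talk.

    import math

    def adaboost(points, labels, weak_pool, rounds):
        """Boost a pool of weak classifiers; labels are +1/-1."""
        m = len(points)
        weights = [1.0 / m] * m          # distribution over training examples
        ensemble = []                    # list of (alpha, hypothesis)
        for _ in range(rounds):
            # pick the weak hypothesis with the smallest weighted error
            errs = [(sum(w for w, x, y in zip(weights, points, labels) if h(x) != y), h)
                    for h in weak_pool]
            eps, h = min(errs, key=lambda t: t[0])
            if eps == 0 or eps >= 0.5:   # perfect, or no better than guessing
                if eps == 0:
                    ensemble.append((1.0, h))
                break
            alpha = 0.5 * math.log((1 - eps) / eps)
            # re-weight: misclassified examples become more important
            weights = [w * math.exp(-alpha * y * h(x))
                       for w, x, y in zip(weights, points, labels)]
            z = sum(weights)
            weights = [w / z for w in weights]
            ensemble.append((alpha, h))
        def classify(x):
            return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
        return classify

    # toy usage: threshold stumps on one-dimensional data
    stumps = [lambda x, t=t: 1 if x > t else -1 for t in range(10)]
    clf = adaboost([1, 2, 3, 7, 8, 9], [-1, -1, -1, 1, 1, 1], stumps, rounds=5)
    print(clf(2), clf(8))   # -> -1 1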

 

On-line learning aims to predict labels for instances, assuming the instances arrive one by one. By adjusting the parameters (or model) on the arrival of each instance, such algorithms make real-time machine learning possible. In this part, the group mainly introduced two online learning algorithms, the Weighted Majority algorithm and the Winnow algorithm, which are representative of learning from experts and learning from examples, respectively.
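
A sketch of the deterministic Weighted Majority algorithm over a panel of experts; the interface and the penalty factor beta below are assumptions for illustration.

    def weighted_majority(expert_predictions, outcomes, beta=0.5):
        """expert_predictions[t][i] is expert i's 0/1 prediction at time t;
        outcomes[t] is the true 0/1 label. Returns the number of mistakes."""
        n = len(expert_predictions[0])
        weights = [1.0] * n
        mistakes = 0
        for preds, y in zip(expert_predictions, outcomes):
            # predict with the weighted majority of the experts
            vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
            vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
            guess = 1 if vote_one >= vote_zero else 0
            if guess != y:
                mistakes += 1
            # penalise every expert that was wrong
            weights = [w * beta if p != y else w for w, p in zip(weights, preds)]
        return mistakes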

 

Slides of the presentation

 

Video used in the presentation

 

 

2.     Topic : Combinatorial Algorithms

Date : 1st February 2013

Group : Wangsheng, Ye Hong, Jin Yong, Wangwei and Gordon

 

 

The topic of the talk was combinatorial algorithms, specifically algorithms for bipartite graphs. The first part of the talk was on finding a maximum matching in a bipartite graph. Two algorithms were presented: a simple algorithm using augmenting paths, and then the more efficient Hopcroft-Karp algorithm, which works with several vertex-disjoint augmenting paths at the same time.
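
A minimal sketch of the simple augmenting-path algorithm (not Hopcroft-Karp), assuming the bipartite graph is given as an adjacency list from left vertices to right vertices; names and the example graph are illustrative.

    def max_bipartite_matching(adj, num_right):
        """adj[u] lists the right-vertices adjacent to left-vertex u."""
        match_right = [None] * num_right      # right vertex -> matched left vertex

        def try_augment(u, visited):
            for v in adj[u]:
                if v in visited:
                    continue
                visited.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] is None or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
            return False

        matching_size = 0
        for u in range(len(adj)):
            if try_augment(u, set()):
                matching_size += 1
        return matching_size, match_right

    # example: 3 left vertices, 3 right vertices
    print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))   # -> (3, [1, 0, 2])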

 

The next part of the talk was on the Stable Marriage Problem. The algorithm presented was the Gale-Shapley algorithm, which finds a stable matching in a bipartite graph given preference lists. Finally, the last part of the talk was on the optimal assignment problem. The algorithms presented were the Hungarian method and the Kuhn-Munkres algorithm.
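
A sketch of the Gale-Shapley proposal algorithm with one side (here, the men) proposing; the dictionary-based interface is an assumption for illustration.

    def gale_shapley(men_prefs, women_prefs):
        """men_prefs[m] / women_prefs[w] are preference lists, most preferred first."""
        rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
        next_choice = {m: 0 for m in men_prefs}   # index of the next woman to propose to
        engaged_to = {}                           # woman -> man
        free_men = list(men_prefs)
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            if w not in engaged_to:
                engaged_to[w] = m
            elif rank[w][m] < rank[w][engaged_to[w]]:   # w prefers the new proposer
                free_men.append(engaged_to[w])
                engaged_to[w] = m
            else:
                free_men.append(m)                      # w rejects m
        return {m: w for w, m in engaged_to.items()}

    # example with two men and two women: matches a with y and b with x
    print(gale_shapley({'a': ['x', 'y'], 'b': ['x', 'y']},
                       {'x': ['b', 'a'], 'y': ['a', 'b']}))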

 

Slides of the presentation

 

 

3.     Topic : Streaming Algorithms

Date : 8th February 2013

Group : Narmada Sambaturu, Subhasree Basu, Alok Kumar Keshri, Rajiv Ratn Shah, Venkata Kiran Yedugundla, Vu Vinh An.

 

In this presentation, basic concepts of streaming algorithms were introduced along with two techniques known as sampling and sketching. For sampling, the classic reservoir sampling algorithm was presented, in which every element of the data stream has equal probability of being chosen and retained in the final sample. To capture the notion of timeliness of data that is missing from reservoir sampling, the group introduced the concept of a 'sliding window' and discussed the priority sampling algorithm for sliding windows. For sketching, two methods, the Bloom filter and the Count-Min sketch, were presented. They are used to solve important problems in limited space, such as finding the most frequent items in a stream and estimating the size of a self-join on a data stream. The group also presented the classical problem of counting the number of distinct elements in a stream, along with two algorithms to solve it: the Flajolet-Martin algorithm, and an algorithm with optimal space complexity.
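
A minimal sketch of the classic reservoir sampling algorithm mentioned above, assuming the stream is any Python iterable; the function name and parameters are illustrative.

    import random

    def reservoir_sample(stream, k):
        """Keep a uniform random sample of k items using O(k) memory."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)            # fill the reservoir first
            else:
                j = random.randint(0, i)          # item i survives with prob k/(i+1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    print(reservoir_sample(range(1_000_000), 5))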

 

Slides of the presentation

 

 

4.     Topic : Linear programming and Ellipsoid algorithm
Date : 15th February 2013
Group : Seungmin Kang, Aissan Dalvandi, Le Tuan Anh, Anup Kumar Das

 

In this talk, the group presented the `ellipsoid algorithm’ to solve linear optimization problems in polynomial time. First an optimization problem is converted into a feasibility problem. The algorithm then starts with a big enough ellipsoid which is guaranteed to cover the feasible region if it exists. Then the centre of the current ellipsoid is checked to see whether it lies within the feasible region or not (that is by checking if the centre satisfies all the constraints). If it doesn’t, then a cutting hyper-plane is found which divides this ellipsoid into two halves (by shifting any violated constraint to the centre of the ellipsoid) so that one of the halves contains the feasible region. Then a new ellipsoid is formed containing the half-ellipsoid which in turn contains the feasible region. These steps are repeated until the centre of the current ellipsoid lies within the feasible region (at which point the algorithm declares `feasible’ and stops) or a predefined number of iterations is over (at which point the algorithm declares `infeasible’ and stops).

 

A proof of correctness and the running time analysis of the algorithm were presented. First the proof of correctness in a simple case (unit ball, separating hyper-plane through the origin, etc.) was presented. Then the same method was used to prove correctness in the general case. For the running time analysis, it was shown that the ratio of the volume of the next ellipsoid to that of the current ellipsoid is upper bounded by exp(-1/(2(n+1))). Hence the algorithm terminates after at most 2n(n+1)ln(R/r) iterations (where n is the dimension of the problem space, R is the radius of the original ellipsoid and r is the radius of a small ball that lies inside the feasible region if it exists).
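
As an illustration of what happens in each iteration, here is a minimal sketch of one central-cut ellipsoid update using the standard formulas, with the ellipsoid written as {x : (x - c)^T P^{-1} (x - c) <= 1}. numpy and the function name are assumptions, and this is not claimed to be the exact variant presented in the talk.

    import numpy as np

    def ellipsoid_step(c, P, a):
        """Shrink the ellipsoid (centre c, shape matrix P) using a violated
        constraint a^T x <= b, i.e. cut with the hyperplane a^T x <= a^T c."""
        n = len(c)
        Pa = P @ a
        g = Pa / np.sqrt(a @ Pa)                 # step direction, normalised in P-norm
        c_new = c - g / (n + 1)
        P_new = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
        # the volume of the new ellipsoid is smaller by a factor of at least
        # exp(-1/(2(n+1))), which gives the iteration bound quoted above
        return c_new, P_new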

 

To understand the concepts, the group first introduced convex optimization basics with the duality theorem. Subsequently, the details of the exact algorithm and the analysis were presented. Finally, the talk was concluded by mentioning some other algorithms for linear programming.

 

Slides of the presentation

 

Video of a run of the algorithm

 

 

5.     Topic : Cryptographic algorithms
Date : 22nd February 2013
Group : Daryl, Etkin, Supartha, Rajendra and Aarthi

 

The first half of the talk focussed on the RSA algorithm, which is an implementation of a public key cryptosystem, where public/private key pairs are generated and messages are encrypted with the public key but can only be decrypted with the private key. RSA can help to create a secure (i.e. private) line of communication between two parties without the need to agree on a shared secret key. The algorithm can also be easily adapted for use in digital signatures, where messages can be signed by their sender and verified as authentic by any receiver. Various sub-algorithms are required to implement RSA, including successive-squaring for modular exponentiation and the extended Euclidean algorithm for computing the decryption exponent. However, since RSA is based on the difficulty of factorizing integers with at least one large prime factor, one of the most important sub-algorithms is that for generating very large primes efficiently. The Miller-Rabin test is one such method to do this and can be implemented as either a randomized or deterministic algorithm. The randomized algorithm turns out to be more suitable in practice.
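
A sketch of the randomized Miller-Rabin test, the kind of primality sub-routine mentioned above for generating large primes; the number of witness rounds is illustrative.

    import random

    def is_probable_prime(n, rounds=40):
        """Randomized Miller-Rabin; a composite passes each round with prob <= 1/4."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # write n - 1 = 2^r * d with d odd
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False          # a is a witness that n is composite
        return True

    print(is_probable_prime(2**61 - 1))   # True: a Mersenne prime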

The second half of the presentation covered Reed-Solomon codes, which are a form of error-correcting code, initially at an intuitive level before delving into the mathematics and algorithms for performing encoding and decoding. Error-correcting codes enable reliable communication over unreliable channels. The basic idea is to introduce redundancy in a controlled way so that the original information can be recovered even in the presence of errors or erasures. Reed-Solomon codes use polynomials over finite fields to encode the data along with parity symbols. The decoding process is a multi-step one whose ultimate goal is to recover the 'error polynomial': the number of errors is found with the Peterson decoder, the error positions with Chien search, and the error values with Forney's algorithm. Operating on the error polynomial and the received polynomial regenerates the intended message. Reed-Solomon codes can correct burst errors and erasures, are efficiently implementable on dedicated chips, and achieve the Singleton bound. They are still extensively used in barcodes, satellite communication, DVDs/CDs, and some RAID storage schemes.
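
A minimal sketch of Reed-Solomon encoding viewed as polynomial evaluation, over the prime field GF(257) rather than the usual GF(2^8) so the arithmetic stays plain; decoding, which is the hard part described above, is omitted, and all names are illustrative.

    P = 257   # a prime, so the integers mod P form a field

    def rs_encode(message, n):
        """Encode k message symbols (integers mod P) into n >= k codeword symbols
        by evaluating the message polynomial at the points 0, 1, ..., n-1."""
        k = len(message)
        assert k <= n <= P
        def poly_eval(x):
            # Horner's rule: message[i] is the coefficient of x^i
            acc = 0
            for coeff in reversed(message):
                acc = (acc * x + coeff) % P
            return acc
        return [poly_eval(x) for x in range(n)]

    # any k of the n symbols determine the degree-(k-1) polynomial, so up to
    # (n - k) erasures, or (n - k)//2 errors, can be corrected
    print(rs_encode([5, 42, 7], n=7))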

 

Slides of the presentation

 

 

6.     Topic: Random Walks and Markov Chains

Date: 8th March 2013

Group: Nimantha Thushan Baranasuriya, Girisha Durrel De Silva, Rahul Singhal, Karthik Yadati, Ziling Zhou

 

In this talk, the group covered how Random Walks and Markov Chains can be used to model problems and to analyse randomized solutions that solve those problems. In the first half of the presentation, the group introduced the concepts of Random Walks and Markov Chains and discussed examples to better explain the two concepts.

 

Next, the group focused on explaining in detail three problems that used Random Walks and Markov Chains for modelling and analysis: 2SAT, 3SAT and Card Shuffling. For each of these problems, the group discussed randomized solutions, and the algorithms were analysed using Random Walks and Markov Chains concepts that were introduced in the first half of the presentation.
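
A sketch of the classic random-walk algorithm for 2SAT along these lines: start from an arbitrary assignment and repeatedly flip a random variable of an unsatisfied clause; for a satisfiable formula, O(n^2) expected flips suffice. The clause encoding and names below are assumptions for illustration.

    import random

    def random_walk_2sat(clauses, num_vars, max_steps=None):
        """clauses: list of pairs of literals, e.g. (1, -2) means (x1 OR NOT x2)."""
        if max_steps is None:
            max_steps = 2 * num_vars * num_vars      # matches the O(n^2) analysis
        assign = [random.choice([False, True]) for _ in range(num_vars + 1)]

        def satisfied(lit):
            return assign[abs(lit)] if lit > 0 else not assign[abs(lit)]

        for _ in range(max_steps):
            unsat = [c for c in clauses if not (satisfied(c[0]) or satisfied(c[1]))]
            if not unsat:
                return assign[1:]                      # satisfying assignment found
            lit = random.choice(random.choice(unsat))  # pick a clause, then a literal
            assign[abs(lit)] = not assign[abs(lit)]    # flip that variable
        return None                                    # probably unsatisfiable

    print(random_walk_2sat([(1, 2), (-1, 2), (-2, 3)], num_vars=3, max_steps=10_000))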

 

Slides of the presentation

 

 

7.     Topic: PageRank

Date: 22nd March 2013

Group: Tian Shangxuan, He Xiangnan, Ji Kun, Guan Peiyong and Wang Hancheng.

 

PageRank is one of the most important algorithms used by the Google search engine, developed by Google's current CEO Larry Page and co-founder Sergey Brin. In this talk, the group presented PageRank, Topic-Sensitive PageRank and their applications in depth and breadth.

 

The first half of the presentation focused on PageRank. The elements of a general search engine were explained to give the audience an overall picture, so that the role of PageRank within the whole search engine is well understood. Generally, there are two essential components for ranking a page in most search engines: a query-dependent score and a query-independent score, which measure the relevance of the webpage to a given query and the importance of the webpage itself, respectively. PageRank focuses on the latter, i.e., ranking the importance of all web pages. The basic idea of PageRank is that the importance of a webpage is based on the importance of the web pages pointing to it, not merely their number. Following from that, a simplified PageRank algorithm was given, and the random surfer model was used to explain how the algorithm works. However, the simplified PageRank has issues in handling dangling nodes, rank sinks and cycles. Using theory from Markov chains, the group then proved the convergence of the algorithm once the stochasticity and primitivity adjustments are applied.
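
A sketch of PageRank by power iteration with the standard damping (teleportation) adjustment, which is essentially what the stochasticity and primitivity fixes amount to; the adjacency list and parameter values are illustrative.

    def pagerank(out_links, damping=0.85, iterations=100):
        """out_links[u] lists the pages that page u links to (0-indexed)."""
        n = len(out_links)
        rank = [1.0 / n] * n
        for _ in range(iterations):
            new_rank = [(1.0 - damping) / n] * n       # teleportation mass
            for u, links in enumerate(out_links):
                if links:
                    share = damping * rank[u] / len(links)
                    for v in links:
                        new_rank[v] += share
                else:                                   # dangling node: spread evenly
                    for v in range(n):
                        new_rank[v] += damping * rank[u] / n
            rank = new_rank
        return rank

    # tiny web of 3 pages: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1
    print(pagerank([[1], [2], [0, 1]]))

Topic-Sensitive PageRank, discussed next, can be obtained by replacing the uniform teleportation share with a topic-biased distribution.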

 

The second half of the presentation focused on Topic-Sensitive PageRank (TSPR) and applications of PageRank. TSPR utilizes a personalized vector, which allows different weighting of user-specified topics. Building on TSPR, the group then compared the differences and relationships among the popular ranking algorithms, including HITS, PageRank, SALSA and TSPR. Lastly, the group presented two applications of PageRank: TrustRank, to tackle the link farm issue, and ImageRank, to rank images based on their visual appearance.

 

In summary, searching the large and heterogeneous Internet is not an easy task. Different techniques, including PageRank, Topic-Sensitive PageRank, keyword matching and natural language processing, can be combined to further enhance the accuracy and meaningfulness of the search results, with the objective of satisfying the needs of diverse Internet users.

 

Slides of the presentation

 

 

8.     Topic: Approximate Counting

Date: 28th March 2013

Group : Subhasree Basu, Rajiv Ratn Shah, Venkata Kiran Yedugundla, Vu Vinh An.

 

In this presentation, the group explained how randomization and approximation techniques can help approximate the solution of some hard counting problems in time polynomial in the input size. First, they introduced the counting problem, its applications, difficulty and terminology. They also gave the definitions of a polynomial approximation scheme (PAS), a fully polynomial approximation scheme (FPAS), a polynomial randomized approximation scheme (PRAS) and a fully polynomial randomized approximation scheme (FPRAS). After that they discussed two problems: the Disjunctive Normal Form (DNF) counting problem and the problem of approximating the permanent.

 

In the DNF counting problem, one tries to approximate the number of satisfying assignments of a given DNF formula. The group first described some unsuccessful attempts to solve the problem, followed by the ‘coverage algorithm’. The coverage algorithm reduces the sample space while ensuring that the set of satisfying assignments is correctly represented.
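
A sketch of the coverage (Karp-Luby) estimator for DNF counting, under the assumption that no clause contains repeated or contradictory literals: sample a clause in proportion to the number of assignments satisfying it, sample such an assignment, and count the sample only when that clause is the first one it satisfies. The encoding and names are illustrative.

    import random

    def karp_luby_dnf_count(clauses, num_vars, samples=100_000):
        """clauses: list of clauses, each a list of literals (e.g. [1, -3] = x1 AND NOT x3).
        Returns an estimate of the number of satisfying assignments."""
        # |S_i| = 2^(n - |c_i|): assignments satisfying clause i on its own
        sizes = [2 ** (num_vars - len(c)) for c in clauses]
        total = sum(sizes)
        hits = 0
        for _ in range(samples):
            # pick a clause with probability proportional to |S_i|
            i = random.choices(range(len(clauses)), weights=sizes)[0]
            # sample a uniform assignment satisfying clause i
            assign = {v: random.choice([False, True]) for v in range(1, num_vars + 1)}
            for lit in clauses[i]:
                assign[abs(lit)] = lit > 0
            def sat(clause):
                return all(assign[abs(l)] == (l > 0) for l in clause)
            # count the sample only if clause i is the first clause it satisfies
            if all(not sat(clauses[j]) for j in range(i)):
                hits += 1
        return total * hits / samples

    # (x1 AND x2) OR (NOT x1 AND x3): 4 of the 8 assignments satisfy it
    print(karp_luby_dnf_count([[1, 2], [-1, 3]], num_vars=3, samples=50_000))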

 

For the problem of approximating the permanent, the permanent of a 0-1 matrix is shown to equal the number of perfect matchings in the corresponding bipartite graph. A near-uniform generator, based on a random walk on a Markov chain, is then used to estimate the number of perfect matchings of the bipartite graph. A canonical path argument is used to prove the polynomial time complexity of the algorithm in detail.

 

Slides of the presentation

 

9.     Topic : Karmarkar’s interior point algorithm

Date: 11th April 2013

Group : Aissan, Anup, Seung Min, Le, David, Narmada Sambaturu.

 

This talk extended the concepts introduced in the talk on the ellipsoid algorithm by presenting Karmarkar's algorithm, which solves the linear programming problem in a faster polynomial time. Karmarkar's algorithm is an interior point algorithm: it starts with an initial feasible point and moves this point toward the optimal vertex in each iteration.

 

The algorithm introduces new constraints into the LP by expecting the input to be in a specific form (Karmarkar's canonical form). These constraints are then exploited to get a faster running time. The following two tricks help Karmarkar's algorithm achieve a fast polynomial running time:

 

1.     Optimization over a ball: The biggest possible ball is drawn inside the feasible region, and all moves made are of length slightly less than the radius of this ball. This way we are guaranteed to stay within the feasible region even though we never check for feasibility. The direction chosen for each move is the direction in which the cost function is reducing the fastest (assuming the objective is to minimize the cost).

 

2.     Projective transformation: The feasible region is transformed at each step in such a way that the current point is in the center of the transformed region. This way the largest ball in the feasible region will have center at the current point, and the largest possible move toward optimal can be made.

 

The presentation introduced the problem to be solved and the intuition behind the techniques used to achieve fast running time. The details of these techniques were then developed, including derivations of some of the transformations. This was followed by an analysis of the running time to prove that the algorithm converges in O(nL) iterations where L is the number of bits needed to represent the LP, and n is the number of dimensions. Analysis was also done to show that work per iteration can be done in O(n^2.5) time, making the overall running time O(n^3.5L). A procedure for converting any linear programming problem to Karmarkar's canonical form was then outlined, thus completing a start-to-finish solution to solve any given LP problem in O(n^3.5L) time.
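
Karmarkar's projective method itself is too long to sketch faithfully here, so the snippet below shows one iteration of the closely related affine-scaling interior-point step instead (a simpler relative of the ideas above, explicitly not the algorithm presented in the talk): rescale so the current point looks central, move along the projected negative-cost direction, and stop short of the boundary. numpy and all names are illustrative.

    import numpy as np

    def affine_scaling_step(A, c, x, step=0.9):
        """One interior-point step for: minimise c^T x subject to A x = b, x > 0.
        x is a strictly feasible point; returns the next strictly feasible point."""
        D2 = np.diag(x * x)                              # scaling by the current point
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)    # dual estimate
        reduced = c - A.T @ w                            # reduced costs
        dx = -D2 @ reduced                               # descent direction; A @ dx = 0
        if np.all(dx >= 0):
            return x + dx                                # unbounded direction or optimal
        # stop at 90% of the distance to the boundary so x stays strictly positive
        alpha = step * min(-xi / di for xi, di in zip(x, dx) if di < 0)
        return x + alpha * dx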

 

Slides of the presentation

 

 

10.  Topic : Approximation algorithms: Sparsest cut.

Date: 12th April 2013

Group : Supartha, Aarthi, Etkin, Daryl Seah, Rajendra.

 

This talk covered the Sparsest Cut problem along with an O(log k) factor randomized poly-time approximation algorithm for the problem, where k is the number of commodities. The introduction discussed the relationship between maximum-flow and minimum-cut problems, highlighting that the Max-Flow Min-Cut Theorem does not apply to Sparsest Cut and the corresponding Demands Multi-Commodity Flow problem since they are defined with respect to multiple commodities.

 

Since Sparsest Cut is NP-hard, its alternate formulation as an integer program is also NP-hard. To overcome this, a relaxation of the problem to a linear program was shown, where the objective function (for both the LP and the IP) uses cut metrics as the variables of the program. The connection between the solutions to these programs and metric spaces was pointed out, in particular that the conical combinations of all cut metrics (i.e. the set CUTn) form an l1-metric space. Thus, solving the LP gives an arbitrary metric space, and metric embeddings are used to find an approximation by an l1-metric space, which gives rise to the O(log k) approximation factor.

 

Some intuition regarding the embedding itself can be explained as follows. The optimal sparsest cut can be a conical combination of an exponential number of cut metrics, which would take exponential time to find; solving the LP relaxation instead gives some combination of cut metrics that is no longer in l1 but in an arbitrary metric space whose dimension is unknown and could be unbounded. The embedding, in a randomized fashion and with high probability, creates an l1 metric in a space of dimension O(log^2 k). This dimension restriction makes it poly-time computable. The l1 metric obtained essentially forms a conical combination of a polynomial number of cut metrics on the vertices of the original graph, such that the sparsity of the cut they define is at most an O(log k) factor of the optimal.
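
To give the flavour of the embedding step, here is a rough sketch of a Fréchet-style (Bourgain-type) embedding in which every coordinate of a point is its distance to a randomly sampled subset of vertices; the parameters are illustrative and the distortion analysis behind the O(log k) factor is entirely omitted.

    import math
    import random

    def frechet_embedding(dist, vertices, repetitions=5):
        """dist[u][v] is the given metric. Each coordinate of a point is its
        distance to a random subset, sampled at several densities."""
        n = len(vertices)
        coords = {v: [] for v in vertices}
        for i in range(1, int(math.log2(n)) + 1):
            for _ in range(repetitions):
                subset = [u for u in vertices if random.random() < 2.0 ** (-i)]
                for v in vertices:
                    coords[v].append(min((dist[v][u] for u in subset), default=0.0))
        return coords   # l1 distances between these vectors approximate dist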

 

Slides of the presentation

 

 

11.  Topic: Stream codes

Date: 18th April 2013

Group: Wangsheng, Ye Hong, Jin Yong, Wangwei, Gordon.

 

The team presented the topic of stream codes, focusing in particular on lossless data compression techniques. To start, the team gave an overall review of Huffman coding, and then focused on arithmetic coding and Lempel-Ziv coding.

The Huffman coding technique assigns fewer bits to symbols that occur more frequently and more bits to symbols that occur less frequently. A tree is built by merging the two symbols with the smallest probabilities at each stage, and the process continues until exactly one node is left, whose probability is 1.
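
A minimal sketch of that construction using a heap, repeatedly merging the two least probable nodes; the probabilities in the example are illustrative.

    import heapq

    def huffman_codes(probs):
        """probs: dict symbol -> probability. Returns dict symbol -> bit string."""
        # each heap entry: (probability, tie-breaker, {symbol: code-so-far})
        heap = [(p, i, {sym: ''}) for i, (sym, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)       # two least probable nodes
            p2, _, right = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in left.items()}
            merged.update({s: '1' + c for s, c in right.items()})
            heapq.heappush(heap, (p1 + p2, counter, merged))
            counter += 1
        return heap[0][2]

    print(huffman_codes({'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}))
    # -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}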

Arithmetic coding is based on a probabilistic model of the source symbols. The interval [0,1) is narrowed iteratively until the end of the string is reached; a number identifying the final interval is then sent to the decoder, which decodes using the same interval-narrowing technique used in encoding. We can also move from a static probabilistic model to an adaptive one by adjusting the probabilities as each new symbol arrives; a simple Bayesian model was introduced by the group to facilitate this discussion. Finally, the group presented the loss analysis for the cases with and without an end-of-file symbol. The coding rate was shown to approach the optimal Shannon bound.
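
A sketch of static arithmetic encoding with exact rational arithmetic: the model narrows [0,1) once per symbol, and any number in the final interval (together with the message length) identifies the string. A practical coder emits bits incrementally rather than keeping exact fractions; the model and names here are illustrative.

    from fractions import Fraction

    def arithmetic_encode(message, probs):
        """probs: dict symbol -> Fraction probability (summing to 1)."""
        # fixed cumulative intervals for each symbol, in sorted symbol order
        cum, c = {}, Fraction(0)
        for sym in sorted(probs):
            cum[sym] = c
            c += probs[sym]
        low, high = Fraction(0), Fraction(1)
        for sym in message:
            width = high - low
            high = low + width * (cum[sym] + probs[sym])
            low = low + width * cum[sym]
        return low, high          # transmit any number in [low, high)

    model = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}
    print(arithmetic_encode('abca', model))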

The Lempel-Ziv algorithm is a lossless data compression technique that builds up a dictionary rather than relying on a probabilistic model. The input is partitioned into substrings, each of which is added as an entry to the dictionary and transmitted in binary. The decoding process is similar. Finally, a brief summary recapped the content covered.
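
A sketch of an LZ78-style dictionary parse (assumed here as the Lempel-Ziv variant meant above): each phrase is emitted as a pair consisting of the index of its longest previously seen prefix and the next character.

    def lz78_compress(text):
        """Return a list of (dictionary index, next character) pairs; index 0 = empty phrase."""
        dictionary = {'': 0}
        output = []
        phrase = ''
        for ch in text:
            if phrase + ch in dictionary:
                phrase += ch                       # keep extending the current phrase
            else:
                output.append((dictionary[phrase], ch))
                dictionary[phrase + ch] = len(dictionary)
                phrase = ''
        if phrase:                                 # flush the final, already seen phrase
            output.append((dictionary[phrase[:-1]], phrase[-1]))
        return output

    print(lz78_compress('abababab'))
    # -> [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (0, 'b')]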

Slides of the presentation

 

 

12.  Topic: Online Algorithms

Date: 19th April 2013

Group: Rahul Singhal, Nimantha Thushan Baranasuriya, Girisha Durrel De Silva, Karthik Yadati, Ziling Zhou

The group first introduced the basic concept of online algorithms. Thereafter the group introduced the concept of the competitive ratio, which is used to analyse online algorithms.

Once the introduction was done, the group presented a simple online problem, the ski rental problem, and an online algorithm for it. The group then moved on to another classic online problem, the secretary hiring problem.
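
A minimal sketch of the classic break-even strategy for the ski rental problem (rent until the rent paid would equal the purchase price, then buy), which is 2-competitive; the prices in the example are illustrative.

    def ski_rental_cost(days_skied, rent=1, buy=10):
        """Cost of the break-even strategy: rent up to the break-even day, then buy."""
        threshold = buy // rent                  # day on which we buy
        if days_skied < threshold:
            return days_skied * rent             # never needed to buy
        return (threshold - 1) * rent + buy      # rented, then bought on day `threshold`

    for d in (3, 10, 100):
        offline = min(d * 1, 10)                 # optimal offline cost for these prices
        print(d, ski_rental_cost(d), offline)    # online cost stays below 2 * offline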

The next phase of the presentation was related to the k-server problem. The group first introduced the k-server problem and then established several lemmas and definitions that are needed to analyse algorithms for k-server problems.

In the final part of the presentation the group introduced a simple deterministic online algorithm for k-server problems called the BAL (balanced) algorithm. The group ended the presentation by discussing a lower bound on the competitive ratio for the k-server problem and introducing a state-of-the-art k-server algorithm, the WFA (Work Function Algorithm).
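
A small sketch of the balance rule: serve each request with the server minimising its cumulative distance travelled so far plus the distance to the request. The metric is given here as a distance function; this is only an illustration, not the exact formulation used in the talk.

    def bal_k_server(initial_positions, requests, dist):
        """initial_positions: list of k points; dist(a, b): metric distance.
        Returns the total distance moved by the balance algorithm."""
        positions = list(initial_positions)
        work = [0.0] * len(positions)            # cumulative distance per server
        total = 0.0
        for r in requests:
            # choose the server minimising (work so far) + (distance to request)
            i = min(range(len(positions)), key=lambda s: work[s] + dist(positions[s], r))
            d = dist(positions[i], r)
            work[i] += d
            total += d
            positions[i] = r
        return total

    # two servers on a line, distance = absolute difference
    print(bal_k_server([0, 100], [10, 90, 20, 80], dist=lambda a, b: abs(a - b)))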

 

Slides of the presentation