CS647 Advanced Topics in Algorithms (Spring 2020)
Pre-requisites: CS447/547 or consent of instructor
Instructor: Christino Tamon
Lecture: TR 14:30-15:45, SN118 (moving to TAC206)
Office hours:
TR 10:45-12:00, 13:00-14:15 (via Zoom). Note: please email to set up a Zoom chat.
Syllabus
The objective of this course is to explore advanced topics related to the complexity of algorithmic problems
from areas of computer science such as machine learning, cryptography, and others.
Announcement (post Spring Break):
We will hold live online classes via Zoom at our regular time (Tue/Thu 2:30-3:45 EST); sessions will be recorded.
The invite links to the Zoom class sessions and to the recordings will be available on Moodle.
Be well and stay safe. See you all on Zoom.
Grading: Class participation, scribe notes, exercises (50%). Project: report, talk (50%).
Text: None.
In place of a textbook, we will work from a list of papers for reading and discussion.
Policies:
Although attendance is not mandatory, students are responsible for all course material
covered in lectures and any exams given during class periods. Students who need to
make up missed course work must provide the required Clarkson official exempt form.
All students must submit their own work; the exchange of ideas is encouraged, but
ultimately the submitted work must be the student's own. Please refer to the Clarkson
University Regulations for more guidelines on academic integrity and related matters.
Topics (tentative)
- Optimization in Markov models.
- Tools: information theory and statistics.
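As a concrete (hypothetical) illustration of the first topic, here is a minimal value-iteration sketch for a discounted Markov decision process, in the spirit of Blackwell's discounted dynamic programming. The two-state, two-action MDP below is entirely made up for illustration; it is not taken from the course materials.

```python
import numpy as np

# Toy MDP (invented numbers): P[a][s, s'] = transition probability
# under action a, R[a][s] = expected immediate reward.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.1, 0.9]])]   # action 1
R = [np.array([1.0, 0.0]),                  # action 0
     np.array([0.0, 2.0])]                  # action 1
gamma = 0.9  # discount < 1 makes the Bellman operator a contraction

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator to a fixed point."""
    V = np.zeros(P[0].shape[0])
    while True:
        # Q[a, s] = R_a(s) + gamma * sum_{s'} P_a(s, s') V(s')
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # optimal values and greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print("V* =", V, "policy =", policy)
```

Because gamma < 1, the update is a gamma-contraction in the sup norm, so the iteration converges geometrically to the unique optimal value function.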
Readings
- Mohri, Rostamizadeh, Talwalkar. Foundations of Machine Learning, MIT Press, 2012.
Chapter 14 "Reinforcement Learning", [pdf].
- Blackwell. "Discounted dynamic programming," The Annals of Mathematical Statistics 36(1):226-235, 1965.
[pdf].
- Madani, Hanks, Condon. "On the undecidability of probabilistic planning and related stochastic optimization problems,"
Artificial Intelligence 147:5-34, 2003.
[pdf].
- Tishby and Polani. "Information Theory of Decisions and Actions,"
[pdf].
Notes (under construction)
- Markov chains: 1.
- Concentration bounds: 2.
- Information theory: 3.
- Probability theory: 4.
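To accompany the Markov chains notes, here is a small sketch of computing a stationary distribution by power iteration. The three-state chain is a made-up example (not from the notes); for an ergodic chain, repeatedly pushing any initial distribution through the transition matrix converges to the unique stationary distribution.

```python
import numpy as np

# Row-stochastic transition matrix of a small ergodic chain (invented example).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def stationary(P, iters=10_000):
    """Power iteration: pi_{t+1} = pi_t P converges to the stationary pi."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start from the uniform distribution
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = stationary(P)
print(pi)  # satisfies pi = pi @ P up to numerical error
```

For this particular birth-death chain, detailed balance gives pi = (1/4, 1/2, 1/4), which the iteration recovers.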
Random topics
- Undecidability in learning:
- Ben-David, Hrubes, Moran, Shpilka, and Yehudayoff. "Learnability can be undecidable,"
Nature Machine Intelligence 1, 44-48, 2019.
link.
- Madani, Hanks, Condon. "On the undecidability of probabilistic planning and related stochastic optimization problems,"
Artificial Intelligence 147:5-34, 2003.
[pdf].
- Information theory:
- Siboni, Cohen. "Universal anomaly detection,"
arXiv.
- Ron, Singer, Tishby. "The power of amnesia,"
pdf.
- Tishby, Zaslavsky. "Deep learning and the information bottleneck principle,"
arXiv.
- More Markov models:
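The information-theory readings above (e.g. the information bottleneck papers) build on Shannon entropy and KL divergence; as a quick refresher, here is a minimal sketch of both quantities for discrete distributions (the example distributions are invented for illustration).

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i, with 0 log 0 := 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl(p, q):
    """KL divergence D(p || q) = sum_i p_i log2(p_i / q_i); needs q_i > 0 wherever p_i > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

print(entropy([0.5, 0.5]))         # 1.0 bit: a fair coin
print(kl([0.5, 0.5], [0.9, 0.1]))  # positive: the distributions differ
```

KL divergence is nonnegative and vanishes exactly when p = q, which is the basic fact behind most of the bounds in these papers.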
References
- Azar, Broder, Karlin, Linial, Phillips. "Biased random walks,"
Combinatorica 16(1):1-18, 1996. [pdf].
- Journal of Machine Learning Research.
- Reinforcement learning: Langford; book (Sutton, Barto); survey (Kaelbling, Littman, Moore); paper (regret bound); book (Agarwal, Jiang, Kakade).
- People: Duchi, Bartlett, Wainwright, Recht, Long, Arora, Schapire, Moran, Kakade, Brunskill, Pineau.