Schedule
The Summer School will consist of lectures and exercises given mainly by the four invited speakers. Local speakers will provide examples of domain adaptation in their research and relate the topic to the research at DIKU and DTU Informatics. In addition to the scientific program, there will be a Summer School dinner on Thursday evening.
A tentative program is available as a PDF here.
Invited Lecturers
Corinna Cortes
The lectures will cover:
- Variations of support vector machines (SVMs) for domain adaptation; a simple instance-reweighting variant is sketched below this list.
- Boosting methods for domain adaptation.
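As a warm-up for the first topic above, here is a minimal sketch of one common SVM variation for domain adaptation, instance reweighting under covariate shift. The use of scikit-learn, the logistic domain classifier, and the function name are illustrative assumptions, not necessarily the methods treated in the lectures.

    # Minimal sketch (illustrative): importance-weighted SVM for covariate shift.
    # Weights come from a logistic "domain classifier" separating source from
    # unlabeled target examples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    def importance_weighted_svm(X_src, y_src, X_tgt_unlabeled, C=1.0):
        # 1. Estimate w(x) proportional to p_target(x) / p_source(x).
        X_dom = np.vstack([X_src, X_tgt_unlabeled])
        y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt_unlabeled))])
        dom = LogisticRegression(max_iter=1000).fit(X_dom, y_dom)
        p_tgt = dom.predict_proba(X_src)[:, 1]
        weights = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)
        # 2. Train an SVM on the source data, weighting each source example.
        clf = SVC(kernel="rbf", C=C)
        clf.fit(X_src, y_src, sample_weight=weights)
        return clf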
Suggested reading material:
- Lorenzo Bruzzone and Mattia Marconcini: Domain Adaptation Problems: A DASVM Classification Technique and a Circular Validation Strategy, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 5, May 2010.
- Yishay Mansour, Mariano Schain: Robust Domain Adaptation. Proceedings of the International Symposium on Artificial Intelligence and Mathematics, 2012.
- Paul Viola, Michael Jones: Robust Real-Time Face Detection. International Journal of Computer Vision 57(2): 137-154, 2004.
- Amaury Habrard, Jean-Philippe Peyrache, Marc Sebban: Domain Adaptation with Good Edit Similarities: A Sparse Way to Deal with Scaling and Rotation Problems in Image Classification. ICTAI 2011: 181-188.
Mehryar Mohri
Lecture 1: Convex Optimization.
This lecture covers:
- convex optimization (definition, properties, theorems).
- linear programming (LP).
- quadratic programming (QP); a tiny numerical example is sketched after this list.
- semidefinite programming (SDP).
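As a tiny illustration of the QP item above, the following sketch solves a two-variable quadratic program with the cvxpy package; the package choice and the particular numbers are assumptions for illustration only.

    # Minimal QP example: minimize 1/2 x^T P x + q^T x subject to x >= 0, sum(x) = 1.
    import numpy as np
    import cvxpy as cp

    P = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive semidefinite matrix
    q = np.array([1.0, -1.0])
    x = cp.Variable(2)
    objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
    constraints = [x >= 0, cp.sum(x) == 1]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    print("optimal value:", problem.value)
    print("optimal x:", x.value)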
Suggested reading material:
- Stephen Boyd and Lieven Vandenberghe: Convex Optimization. Cambridge University Press, 2004.
- Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar: Foundations of Machine Learning. MIT Press, 2012. (Note that every attendee of the Summer School receives a copy of this book.)
Lecture 2: Adaptation in Regression: Generalization Bounds and Algorithms.
This lecture covers:
- learning bounds for regularization-based algorithms.
- the discrepancy minimization algorithm and its optimization.
- empirical results.
Suggested reading material:
- Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh: Domain Adaptation: Learning Bounds and Algorithms. COLT 2009.
- Corinna Cortes, Mehryar Mohri: Domain Adaptation in Regression. ALT 2011: 308-323.
- Mehryar Mohri, Andres Munoz Medina: New Analysis and Algorithm for Learning with Drifting Distributions. arXiv:1205.4343, 2012.
Yishay Mansour
Lecture 1: Generalization Bounds
This lecture covers:
- Review of generalization bounds (VC dimension, Rademacher complexity).
- Single-source domain adaptation bounds: discrepancy distance, bounded-VC-dimension bounds, and re-weighting of source examples; a bound of this flavor is sketched after this list.
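For orientation, the first suggested reading below (Ben-David et al.) proves bounds of roughly the following form, relating the target error of a hypothesis h to its source error, a divergence between the two domains, and the error of the best joint hypothesis:

    \epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(D_S, D_T) + \lambda,
    \quad \text{where} \quad \lambda = \min_{h' \in \mathcal{H}} \big[ \epsilon_S(h') + \epsilon_T(h') \big].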
Suggested reading material:
- Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman Vaughan: A theory of learning from different domains. Machine Learning 79(1-2): 151-175, 2010.
- Yishay Mansour, Mehryar Mohri and Afshin Rostamizadeh: Domain Adaptation: Learning Bounds and Algorithms. COLT 2009.
Lecture 2: Multi-source Adaptation
The motivation comes from the observation that in many cases one has huge amounts of unlabeled data from the target domain and fairly good classifiers from related source domains. The goal is to combine those classifiers to predict in the new target domain, for which only unlabeled examples are available. The surprising result is that there exists a class of combination rules that provably transfers, in a precise sense, the predictive power of the source domains to the new target domain.
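To make the idea concrete, here is a minimal sketch of a distribution-weighted combination of source predictors in the spirit of these lectures; the density estimators, the uniform mixture weights, and the function name are illustrative assumptions, not the algorithm from the papers below.

    import numpy as np

    def distribution_weighted_combination(x, classifiers, densities, z=None):
        # classifiers: list of callables h_i(x) -> real-valued prediction
        # densities:   list of callables D_i(x) -> estimated source density at x
        # z:           mixture weights over the sources (uniform if None)
        k = len(classifiers)
        z = np.full(k, 1.0 / k) if z is None else np.asarray(z, dtype=float)
        w = np.array([z[i] * float(densities[i](x)) for i in range(k)])
        w = w / w.sum()                              # normalize the per-source weights
        preds = np.array([float(h(x)) for h in classifiers])
        return float(np.dot(w, preds))               # distribution-weighted prediction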
Suggested reading material:
- Yishay Mansour, Mehryar Mohri and Afshin Rostamizadeh: Multiple Source Adaptation and the Rényi Divergence. UAI 2009.
- Yishay Mansour, Mehryar Mohri and Afshin Rostamizadeh: Domain Adaptation with Multiple Sources. NIPS 2008.
Trevor Darrell
Lectures: Domain Adaptation in Object Recognition
Domain adaptation is an important emerging topic in computer vision. I'll review recent studies of domain shift in the context of object recognition, and cover new methods that adapt object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. Transformations may be learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While I'll focus the presentation on object recognition tasks, the transform-based adaptation technique is general and could be applied to non-image data. I'll also describe methods for multi-source domain adaptation, and adaptation in the context of discriminative learning frameworks, as well as available online datasets relevant to vision-based domain adaptation.
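As a toy stand-in for the transform-based adaptation described above (and explicitly not the kernel-transform method of the papers listed below), one can picture a linear map fitted on paired source/target features by regularized least squares; everything in the sketch, including the function name and the pairing assumption, is illustrative.

    import numpy as np

    def fit_linear_domain_transform(X_src, X_tgt, reg=1e-3):
        # Learn W such that X_tgt @ W approximates X_src, from paired rows
        # (one source/target correspondence per row), via ridge-regularized
        # least squares.
        d = X_tgt.shape[1]
        A = X_tgt.T @ X_tgt + reg * np.eye(d)
        B = X_tgt.T @ X_src
        return np.linalg.solve(A, B)                 # d x d map into the source feature space

    # Usage sketch: map new target features into the source feature space,
    # then apply a classifier trained on labeled source data.
    # X_tgt_mapped = X_tgt_new @ W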
Tutorial: Domain Adaptation in Object Recognition
In this practical exercise, we will explore domain adaptation problems and standard algorithms on the PASCAL motorbike-to-bike task (Ref. 1) and the Office dataset (Ref. 2). The exercise is organized by Trevor Darrell and led by Kersten Petersen.
Suggested reading material:
- Yusuf Aytar and Andrew Zisserman: Tabula Rasa: Model Transfer for Object Category Detection. CVPR 2011.
- Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell: Adapting Visual Category Models to New Domains. ECCV 2010.
- Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell: What You Saw is Not What You Get: Domain Adaptation Using Asymmetric Kernel Transforms. CVPR 2011.
Guest Lecturers
Tobias Glasmachers
Lecture: Multi-class Support Vector Machines
Extending the standard Support Vector Machine classifier to problems with more than two classes is a non-trivial issue, for which a multitude of solutions has been proposed. The lecture will be a round trip through the diverse domain of large-margin multi-category classification. We will see how most approaches can be understood systematically within a unifying framework.
Suggested reading material:
The audience should be familiar with "standard" support vector machines for binary classification as presented in the introductory lectures.
Tutorial: Machine Learning with the Shark Library
The "Shark" machine learning library is a modular C++ library for the design and optimization of adaptive systems. In this hands-on practical course on machine learning with C++ we will go through the different steps of the machine learning processing chain: importing data, pre-processing, training a model, optimizing parameters, and testing the final performance.
Important: It is highly recommended that participants have the Shark library, version 3, pre-installed (and the example programs tested) on their laptop computers. Installing Shark requires nothing but a modern C++ compiler and a recent version of the Boost libraries. The Shark library is ready for download from
Suggested reading material:
- Christian Igel, Verena Heidrich-Meisner, and Tobias Glasmachers: Shark. Journal of Machine Learning Research 9: 993-996, 2008.
Marco Loog
Covariate Shift: Some Theory, Some Examples, and Some Observations
Local Lecturers
Christian Igel
Lecture 1: Supervised Machine Learning
Machine learning is about developing and analyzing algorithms that automatically improve with experience. Such algorithms are already an integral part of today's computing systems, for example in search engines, recommender systems, or biometric applications. This short lecture will provide a gentle introduction to supervised machine learning.
Lecture 2: An Introduction to Markov Random Fields and Restricted Boltzmann Machines
This lecture considers Markov random fields (i.e., undirected graphical models) with a special focus on restricted Boltzmann machines (RBMs). In the first part, undirected graphical models are introduced, with image denoising serving as an application example. The second part concentrates on RBMs, which are undirected graphical models describing stochastic neural networks. Recently, RBMs have attracted attention as building blocks of deep belief networks (DBNs).
Marleen de Bruijne
Domain Adaptation in Medical Imaging
This talk will cover current approaches to coping with common variations in medical imaging data as well as a proof-of-concept study of transfer learning for medical image segmentation.