Domain Adaptation
Machine learning today is applied to a wider and more diverse range of data than ever before. Speech recognition algorithms have moved from dictation with close-talking microphones to recognizing search queries spoken over cell phones. Natural language processing models built from newswire must remain accurate on web pages, search queries, and tweets, and computer vision models must recognize objects in pictures taken from arbitrary angles by cameras of widely varying quality. Machine learning algorithms that explicitly attempt to be robust to these changes in the data are known as domain adaptation algorithms.
In the past few years, machine learning researchers have formalized the domain adaptation problem as learning from a source distribution and evaluating on a different target distribution. This formulation has inspired a host of new algorithms and theory, and we now have a much better understanding of how domain adaptation relates to transfer learning, sample selection bias correction, learning with multiple objectives, and learning with side information.
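As a brief illustration of this formalization (the notation here is ours, not part of the announcement): with a source distribution \mathcal{D}_S and a target distribution \mathcal{D}_T over examples (x, y), the goal is to learn a hypothesis h with low target risk even though labeled training data is drawn from the source,

    \epsilon_S(h) = \mathbb{E}_{(x,y)\sim \mathcal{D}_S}\,[\ell(h(x), y)],
    \qquad
    \epsilon_T(h) = \mathbb{E}_{(x,y)\sim \mathcal{D}_T}\,[\ell(h(x), y)],

and under the covariate-shift assumption p_T(y \mid x) = p_S(y \mid x) the target risk can be rewritten as an importance-weighted source expectation,

    \epsilon_T(h) = \mathbb{E}_{(x,y)\sim \mathcal{D}_S}\!\left[\frac{p_T(x)}{p_S(x)}\,\ell(h(x), y)\right].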
Our summer school aims to bring together students and researchers who study the theoretical, algorithmic, and empirical aspects of learning when the source and target distributions differ. The research questions it addresses are critical: ignoring them can lead to dramatically poor results, and straightforward existing solutions based on importance weighting are often ineffective and can even worsen performance. Among the questions addressed in this summer school are: Which algorithms should be used for domain adaptation? Under what theoretical conditions will they succeed? How do these algorithms scale to large domain adaptation problems, and how can they be applied to computer vision?
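To make the importance-weighting baseline mentioned above concrete, the following is a minimal sketch (our illustration, not part of the school's material): it estimates the weights p_T(x)/p_S(x) with a source-vs-target domain classifier and then reweights the labeled source data, assuming scikit-learn is available. When source and target overlap poorly, the estimated weights become extreme, which is one reason this baseline can hurt rather than help.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target):
        """Estimate w(x) ~ p_T(x) / p_S(x) with a domain classifier."""
        X = np.vstack([X_source, X_target])
        # Domain labels: 0 = source, 1 = target.
        d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p_target = clf.predict_proba(X_source)[:, 1]
        # Bayes' rule turns P(target | x) into a density ratio,
        # up to the constant n_S / n_T.
        w = (p_target / (1.0 - p_target)) * (len(X_source) / len(X_target))
        return np.clip(w, 1e-3, 1e3)  # clip extreme weights for stability

    def weighted_source_classifier(X_source, y_source, X_target):
        """Train on labeled source data, reweighted toward the target."""
        w = importance_weights(X_source, X_target)
        model = LogisticRegression(max_iter=1000)
        model.fit(X_source, y_source, sample_weight=w)
        return model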