Adaptive & Multitask Learning

Algorithms & Systems

June 15 | ICML 2019 Workshop | Long Beach, CA, USA

amtl2019organizers@gmail.com

Overview

Driven by progress in deep learning, the machine learning community is now able to tackle increasingly complex problems, ranging from multi-modal reasoning to dexterous robotic manipulation, all of which typically involve solving nontrivial combinations of tasks. Thus, designing adaptive models and algorithms that can efficiently learn, master, and combine multiple tasks is the next frontier.

Establishing connections between approaches developed for seemingly different problems is particularly useful for analyzing the landscape of multitask learning techniques. For instance, the compositionality of language tasks motivates modular architectures, which can also be used to construct modular policies that tackle the compositional structure of robotic manipulation. Similarly, ideas from personalization in recommender systems (where serving different users can be regarded as different tasks) may prove effective when applied to learning problems in healthcare or when used to adapt controllers in fleets of autonomous vehicles.
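As an illustration of the modular, shared-representation architectures alluded to above, here is a minimal sketch of hard parameter sharing in PyTorch: a single encoder shared across tasks feeds lightweight task-specific heads, so tasks (e.g., individual users in a personalization setting) can be added or swapped modularly. All class names, task names, and dimensions are illustrative assumptions, not code from any workshop material.

    # A minimal, illustrative sketch of hard parameter sharing for multitask learning.
    # All names and dimensions are hypothetical.
    import torch
    import torch.nn as nn

    class SharedMultitaskModel(nn.Module):
        def __init__(self, input_dim, hidden_dim, task_output_dims):
            super().__init__()
            # Representation shared across all tasks.
            self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            # One lightweight head per task, keyed by task name.
            self.heads = nn.ModuleDict({
                task: nn.Linear(hidden_dim, out_dim)
                for task, out_dim in task_output_dims.items()
            })

        def forward(self, x, task):
            # Route the shared representation through the requested task head.
            return self.heads[task](self.encoder(x))

    # Example: two tasks (e.g., two user groups in a personalization setting).
    model = SharedMultitaskModel(input_dim=16, hidden_dim=32,
                                 task_output_dims={"task_a": 4, "task_b": 2})
    outputs = model(torch.randn(8, 16), task="task_a")  # shape: (8, 4)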

The AMTL workshop aims to bring together machine learning researchers working on topics ranging from theory to applications and systems, to explore and discuss:

  • advantages, disadvantages, and applicability of different approaches to learning in multitask settings,
  • formal or intuitive connections between methods developed for different problems that lead to a better understanding of multitask learning techniques and inspire the transfer of techniques across research lines,
  • fundamental challenges and open questions that the community needs to tackle for the field to move forward.

News

March 29: Call for papers is posted.

April 8: Submissions are open.

Call for Papers

We invite the submission of short papers of up to 4 pages (excluding references and supplementary material). Papers should be in the ICML 2019 format (see the official style guidelines). Submissions will be accepted as contributed talks or poster presentations. Accepted papers will be posted on the website, but there will be no archival proceedings.

Paper submission: OpenReview (opens on April 8).

Relevant topics include but are not limited to:

Algorithms & Theory

  • Novel algorithms for adaptive and multitask learning (e.g., for learning context-aware or personalized models).
  • Understanding meta-learning, multitask learning, never-ending learning, few-shot learning, and learning of personalized models.
  • Understanding the capacity of different models in the context of multitask learning.
  • Understanding per-task data efficiency and sample complexity.
  • Theoretical guarantees for transfer learning, domain adaptation, etc.
  • Learning representations that enable efficient multitask learning.
  • Learning algorithms, agents, and systems in dynamic/complex environments.

Algorithms & Systems

  • Scalable approaches to multitask learning and adaptation.
  • Distributed and federated learning algorithms and systems.
  • Pre-training and fine-tuning approaches and best practices.
  • Weak supervision and automated data labeling.
  • Modular AI/ML algorithms and systems.

Applications

  • Applications of adaptive and multitask algorithms in recommender, robotic, and healthcare systems.
  • Real-world problems and benchmarks for few-shot learning.

Additionally, we welcome and encourage position papers on the workshop theme. We are also particularly interested in papers that introduce benchmark datasets, challenges, and competitions that further progress in the field.

Important Dates

  • Submission opens: April 8, 2019
  • Submission deadline: May 1, 2019
  • Notification: May 25, 2019
  • Workshop: June 15, 2019

Speakers

Alex Ratner (Stanford)

Ameet Talwalkar (CMU, Determined AI)

Chelsea Finn (Stanford, Berkeley, Google)

Rich Caruana (Microsoft)

TBA

Organizing Committee


Program Committee

TBA

Sponsors

TBA