NIPS 2016

Saturday 10 December 2016
Room 114, CCIB
Barcelona, Spain


Artificial Intelligence for Data Science

Invited Speakers' Abstracts

Automated Data Cleaning via Multi-View Anomaly Detection

Tom Dietterich - Oregon State University
9:10 10th December 2016

One of the first steps in the data analysis pipeline is data cleaning: detecting and removing readings from failed sensors. This talk will discuss the application of anomaly detection algorithms to find and remove bad readings from weather station data. We will review our previous work on DBN time series models and our current work on applying non-parametric anomaly detection algorithms as part of our SENSOR-DX multi-view anomaly detection architecture. A major challenge in evaluating these algorithms is obtaining ground truth, because real sensor data tends to be labeled conservatively by domain experts.
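As a rough illustration of the cleaning step described above, the sketch below flags suspect readings in a simulated weather-station temperature series with a generic non-parametric anomaly detector. Isolation Forest stands in here for the algorithms discussed in the talk; this is not the SENSOR-DX system, and the data and threshold are invented for the example.

```python
# Hypothetical sketch: flag bad sensor readings with a non-parametric
# anomaly detector (Isolation Forest as a stand-in; NOT SENSOR-DX).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated hourly temperatures: a smooth seasonal cycle plus noise.
temps = 15 + 5 * np.sin(np.linspace(0, 8 * np.pi, 500))
temps += rng.normal(0, 0.5, size=temps.shape)

# Inject a few "failed sensor" readings (e.g. a stuck thermistor).
failure_idx = [50, 200, 350]
temps[failure_idx] = -40.0

# Fit the detector on the raw readings; -1 marks anomalies.
X = temps.reshape(-1, 1)
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)

flagged = np.where(labels == -1)[0]
print("flagged indices:", flagged)
```

In a real deployment the ground-truth problem noted above applies: there is no clean label telling us which flagged readings are genuine sensor failures versus unusual but valid weather.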

Automated Model Construction and The Automated Statistician

Christian Steinruecken - University of Cambridge
11:00 10th December 2016


Why should I trust you? Explaining the predictions of machine-learning models

Carlos Guestrin - University of Washington
14:00 10th December 2016

Despite widespread adoption, machine-learning models remain mostly black boxes, making it very difficult to understand the reasons behind a prediction. Such understanding is fundamentally important for assessing trust in a model before we take actions based on a prediction or choose to deploy a new ML service. Understanding the reasons behind predictions further provides insights into the model, which can be used to turn an untrustworthy model or prediction into a trustworthy one.

In this talk, we present a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction, as well as a method to explain models by presenting representative individual predictions and their explanations in a nonredundant way. We demonstrate the flexibility of these methods by explaining different models for text (e.g., random forests) and image classification (e.g., deep neural networks) and explore the usefulness of explanations via novel experiments with human subjects. These explanations empower users in various scenarios that require trust, such as deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and detecting why a classifier should not be trusted.
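The core idea of learning an interpretable model locally around one prediction can be sketched in a few lines. The example below is a simplified, hypothetical version for numeric features only: it perturbs the instance, queries the black box, weights samples by proximity, and fits a weighted linear surrogate whose coefficients serve as the explanation. The black-box function and all parameters are invented for illustration; the method presented in the talk is more general (e.g., it also handles text and images).

```python
# Simplified sketch of a local-surrogate explanation (assumptions:
# a black-box classifier over numeric features; toy parameters).
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Stand-in black box: probability depends only on feature 0.
    return 1 / (1 + np.exp(-3 * X[:, 0]))

def explain_locally(x, predict_fn, n_samples=500, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance and query the black box on the samples.
    Z = x + rng.normal(0, scale, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Weight each sample by its proximity to x (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # Fit a weighted linear surrogate around x; its coefficients
    # indicate how much each feature drives the local prediction.
    surrogate = Ridge(alpha=1.0).fit(Z - x, y, sample_weight=w)
    return surrogate.coef_

x = np.array([0.2, -1.0, 0.7])
coefs = explain_locally(x, black_box)
print(coefs)
```

For this toy black box, the surrogate should assign feature 0 a much larger coefficient than features 1 and 2, matching the intuition that the explanation is faithful to the model's local behavior.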

Advances and Challenges in Automated Machine Learning: Blackbox Optimization and Beyond

Frank Hutter - University of Freiburg
15:30 10th December 2016

How do we select which machine learning model to use for a given dataset, with which pre- and post-processing steps, and with which hyperparameter setting?

In this talk, I will first review how Bayesian optimization can be used to tackle these problems as a joint blackbox optimization problem, thereby enabling automated machine learning. Then, I will discuss extensions of Bayesian optimization that go beyond this traditional blackbox formulation to effectively attack problems where evaluating a single hyperparameter setting can require as long as a week.
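The joint blackbox formulation mentioned above treats the model choice and its hyperparameters as one search space, with each configuration scored by a blackbox evaluation such as cross-validation. The sketch below illustrates that formulation with plain random search standing in for the Bayesian optimizers discussed in the talk (e.g., tools such as SMAC); the search space and budget are invented for the example.

```python
# Hedged sketch of joint model + hyperparameter search as blackbox
# optimization. Random search stands in for Bayesian optimization.
import random
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

random.seed(0)
X, y = load_iris(return_X_y=True)

def sample_config():
    # Joint space: which model to use, plus that model's hyperparameters.
    if random.random() < 0.5:
        return ("svm", {"C": 10 ** random.uniform(-2, 2)})
    return ("rf", {"n_estimators": random.randrange(10, 200)})

def evaluate(config):
    # The "blackbox": train and score one configuration.
    name, params = config
    if name == "svm":
        model = SVC(**params)
    else:
        model = RandomForestClassifier(**params, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

# Evaluate a small budget of sampled configurations; keep the best.
best = max((sample_config() for _ in range(20)), key=evaluate)
print("best config:", best)
```

A Bayesian optimizer improves on this loop by fitting a model of configuration quality and proposing promising configurations instead of sampling blindly, which matters most when, as noted in the talk, a single evaluation can take days.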