Perception and Prediction at ZOOX by James Philbin · ICML 2019

This workshop is a joint effort between the 4th ICML Workshop on Human Interpretability in Machine Learning (WHI) and the ICML 2019 Workshop on Interactive Data Analysis Systems (IDAS). We have combined forces this year to run Human in the Loop Learning (HILL) in conjunction with ICML 2019! The workshop brings together researchers and practitioners who study interpretable and interactive learning systems, with applications in large-scale data processing, data annotation, data visualization, human-assisted data integration, systems and tools for interpreting machine learning models, and algorithm design for active learning, online learning, and interpretable machine learning. The target audience is anyone interested in solving problems with machines while keeping a human as an integral part of the process. The workshop serves as a platform where researchers can discuss approaches that bridge the gap between humans and machines and get the best of both worlds. We welcome high-quality submissions in the broad area of human-in-the-loop learning. A few (non-exhaustive) topics of interest include:

- Systems for online and interactive learning algorithms
- Active/interactive machine learning algorithm design
- Systems for collecting, preparing, and managing machine learning data
- Model understanding tools (verifying, diagnosing, debugging, visualization, introspection, etc.)
- Design, testing, and assessment of interactive systems for data analytics
- Psychology of human concept learning
- Generalized additive models, sparsity, and rule learning
- Interpretable unsupervised models (clustering, topic models, etc.)
- Interpretation of black-box models (including deep neural networks)
- Interpretability in reinforcement learning