Workshop on Hardware and Algorithms for Learning On-a-chip

(HALO) 2016

November 10, 2016

Doubletree Hotel, Austin, TX

(The program of the previous workshop is available: 2015)

Machine learning algorithms, such as those for image-based search, face recognition, multi-category classification, and scene analysis, are being developed that will fundamentally alter the way individuals and organizations live, work, and interact with each other. However, their computational complexity still challenges state-of-the-art computing platforms, especially when the application of interest is tightly constrained by requirements for low power, high throughput, and low latency.

In recent years, there have been enormous advances in implementing machine learning algorithms on application-specific hardware (e.g., FPGAs and ASICs). There is a timely need to map the latest learning algorithms onto physical hardware in order to achieve orders-of-magnitude improvements in performance, energy efficiency, and compactness. Recent progress in computational neuroscience and nanoelectronic technology, such as resistive memory devices, will further shed light on future hardware-software platforms for learning on-a-chip.

The overarching goal of this workshop is to explore the potential of on-chip machine learning, to reveal emerging algorithms and design needs, and to promote novel applications of learning. It aims to establish a forum to discuss current practices, as well as future research needs, in the fields listed below.

Key Topics

•   Synaptic plasticity and neuron motifs of learning dynamics

•   Computational models of cortical activity

•   Sparse learning, feature extraction, and personalization

•   Deep learning with high speed and high power efficiency

•   Hardware acceleration for machine learning

•   Hardware emulation of the brain

•   Nanoelectronic devices and architectures for neuro-computing

•   Applications of learning on smart mobile platforms

Tentative Program

8:00am – 8:15am

Introduction and opening remarks

8:15am – 8:50am

Keynote talk

Rob A. Rutenbar (UIUC): Hardware Inference Accelerators for Machine Learning

8:50am – 10:05am

Session 1: Hardware Acceleration of Machine Learning, Chair: Ganesh Dasika (ARM)

Eriko Nurvitadhi (Intel): Can FPGAs beat GPUs in Accelerating Next-Generation Deep Neural Networks?

Song Han (Stanford/Deephi): From Model to FPGA: Software-Hardware Co-Design for Efficient Neural Network Acceleration

Vivienne Sze (MIT): Energy-Efficient Hardware for Embedded Vision and Deep Convolutional Neural Networks (slides)

10:05am – 10:20am

Coffee Break

10:20am – 12:00pm

Session 2: Intelligent Systems and Applications, Chair: Amit Trivedi (UIC)

Andrew Howard / Dmitry Kalenichenko (Google): Mobile Vision Models: Efficient Model Architectures, Inference and Training

Huafeng Yu (Boeing): Mobile Autonomous Systems: A Safety Perspective

Yu Cheng (IBM): Learning Effective Representations with Electronic Medical Records: A Deep Learning Framework

Asim Roy (ASU): Massively Parallel Hardware to Learn from High-dimensional High-speed Streaming Data at the Edge of IoT

12:00pm – 1:30pm

Lunch

1:30pm – 2:45pm

Session 3: Neuromorphic Computing, Chair: Peter Chin (Draper Lab)

Kaushik Roy (Purdue): Unsupervised Regenerative Learning: Enabling On-chip Intelligence in Deep Spiking Networks

Catherine D. Schuman (ORNL): Research Challenges in Neuromorphic Computing: A Computer Science Perspective

Thiago S. Mosqueiro (UCSD): Fast and Stable Discrimination in Divergent-Convergent Neural Networks: From Deep Learning back to Neuroscience

2:45pm – 4:30pm

Session 4: Poster Session

4:30pm – 5:00pm

Workshop summary

Technical Program Committee

Co-chairs: Yu (Kevin) Cao, Arizona State University

Xin Li, Carnegie Mellon University

Jae-sun Seo, Arizona State University

 

Rob Aitken, ARM

Shawn Blanton, Carnegie Mellon University

Sankar Basu, National Science Foundation

Maxim Bazhenov, University of California, San Diego

Yiran Chen, University of Pittsburgh

Kailash Gopalakrishnan, IBM

Yiorgos Makris, University of Texas, Dallas

Kendel McCarley, Raytheon

Mustafa Ozdal, Bilkent University, Turkey

Yuan Xie, University of California, Santa Barbara

Chris Yu, Charles Stark Draper Laboratory


Last updated on December 1, 2016. Contents subject to change. All rights reserved.