Workshop on Hardware and Algorithms for Learning On-a-chip

(HALO) 2017

November 16, 2017

Irvine Marriott Hotel, Irvine, CA

(Programs of previous workshops are available: 2015, 2016)

Call for Posters, due October 30, 2017. Student travel support is available.

Machine learning algorithms, such as those for image-based search, face recognition, multi-category classification, and scene analysis, are being developed that will fundamentally alter the way individuals and organizations live, work, and interact with each other. However, their computational complexity still challenges state-of-the-art computing platforms, especially when the application of interest is tightly constrained by requirements on power, throughput, and latency.

In recent years, there have been enormous advances in implementing machine learning algorithms with application-specific hardware (e.g., FPGAs and ASICs). There is a timely need to map the latest learning algorithms to physical hardware in order to achieve orders-of-magnitude improvements in performance, energy efficiency, and compactness. Recent progress in computational neuroscience and nanoelectronic technology, such as resistive memory devices, will further help shed light on future hardware-software platforms for learning on-a-chip.

The overarching goal of this workshop is to explore the potential of on-chip machine learning, to reveal emerging algorithms and design needs, and to promote novel applications for learning. It aims to establish a forum to discuss current practices, as well as future research needs, in the fields listed below.

Key Topics

•   Synaptic plasticity and neuron motifs of learning dynamics

•   Computational models of cortical activities

•   Sparse learning, feature extraction, and personalization

•   Deep learning with high speed and high power efficiency

•   Hardware acceleration for machine learning

•   Hardware emulation of the brain

•   Nanoelectronic devices and architectures for neuro-computing

•   Applications of learning on smart mobile platforms

Tentative Program

8:15am – 8:30am

Introduction and opening remarks

8:30am – 9:15am

Keynote talk

Kaushik Roy (Purdue): Re-Engineering Computing For Next Generation Autonomous Intelligent Systems

9:15am – 10:30am

Session 1: Hardware Acceleration of Machine Learning, Chair: Xiang Chen (GMU)

Hai Li (Duke): Running Sparse and Low-Rank Neural Networks

Paul Whatmough (ARM): Deep Neural Network Accelerators for IoT Applications

Farinaz Koushanfar (UCSD): M^2L: Bringing the Machine into the Loop of Machine Learning

10:30am – 10:45am

Coffee Break

10:45am – 12:00pm

Session 2: Intelligent Systems and Applications, Chair: Emre Neftci (UCI)

Karam Chatha (Qualcomm): Deep Learning SW and HW on Mobile SoCs

Suyog Gupta (Google): HW/Algorithm Co-Design for Efficient On-Device Inference

Yongpan Liu (Tsinghua): RRAM-based Nonvolatile Intelligent Processor for Energy Harvesting IoE System

12:00pm – 1:30pm

Lunch

1:30pm – 3:10pm

Session 3: Neuromorphic Computing, Chair: Sean Li (Qualcomm)

Mike Davies (Intel): Lessons Learned In Pursuit of Scalable Neuromorphic On-Chip Learning

Bipin Rajendran (NJIT): Supervised Learning in Spiking Neural Networks

Gert Cauwenberghs (UCSD): Scalable Silicon Neuromorphic Learning Machines with Hierarchical Reconfigurable Synaptic Connectivity and Plasticity

Emre Neftci (UCI): Neuro-Inspired Learning Machines: Bridging the Gap between Machine Learning and Neuromorphic Engineering

3:10pm – 4:30pm

Session 4: Poster Session

4:30pm – 5:00pm

Workshop summary

Technical Program Committee

Co-chairs: Jae-sun Seo, Arizona State University

Yu (Kevin) Cao, Arizona State University

Xin Li, Duke University


Rob Aitken, ARM

Shawn Blanton, Carnegie Mellon University

Sankar Basu, National Science Foundation

Maxim Bazhenov, University of California, San Diego

Yiran Chen, University of Pittsburgh

Kailash Gopalakrishnan, IBM

Yiorgos Makris, University of Texas, Dallas

Kendel McCarley, Raytheon

Mustafa Ozdal, Bilkent University, Turkey

Yuan Xie, University of California, Santa Barbara

Chris Yu, Charles Stark Draper Laboratory


Last updated on October 8, 2017. Contents subject to change. All rights reserved.