Workshop on Hardware and Algorithms for Learning On-a-chip

(HALO) 2015

November 5, 2015

Room: Phoenix Central, Doubletree Hotel, Austin, TX

Registration through ICCAD

Machine learning algorithms, such as those for image-based search, face recognition, multi-category classification, and scene analysis, are being developed that will fundamentally alter the way individuals and organizations live, work, and interact with each other. However, their computational complexity still challenges state-of-the-art computing platforms, especially when the application of interest is tightly constrained by requirements such as low power, high throughput, and low latency.

In recent years, there have been enormous advances in implementing machine learning algorithms with application-specific hardware (e.g., FPGAs and ASICs). There is a timely need to map the latest learning algorithms to physical hardware in order to achieve orders-of-magnitude improvements in performance, energy efficiency, and compactness. Recent progress in computational neuroscience and nanoelectronic technology, such as resistive memory devices, will further help shed light on future hardware-software platforms for learning on-a-chip.

The overarching goal of this workshop is to explore the potential of on-chip machine learning, to reveal emerging algorithms and design needs, and to promote novel applications for learning. It aims to establish a forum to discuss the current practices, as well as future research needs in the fields below.

Report from HALO 2015 (PDF)

Key Topics

- Synaptic plasticity and neuron motifs of learning dynamics

- Computational models of cortical activities

- Sparse learning, feature extraction and personalization

- Deep learning with high speed and high power efficiency

- Hardware acceleration for machine learning

- Hardware emulation of the brain

- Nanoelectronic devices and architectures for neuro-computing

- Applications of learning on a smart mobile platform

8:00am - 8:15am

Introduction and opening remarks

8:15am - 8:50am

Keynote talk

Jason Cong (UCLA): Machine Learning on FPGAs

8:50am - 10:30am

Session 1: Hardware Acceleration of Machine Learning, Chair: Jae-sun Seo (ASU)

Eugenio Culurciello (Purdue): Deep Learning in Practice

Eric Chung (Microsoft): Toward Accelerating Deep Learning at Scale Using Specialized Hardware in the Datacenter

Jian Li (Huawei): System Optimization of Symbiotic Data Management in Cognitive Computing Cloud

Yasuki Tanabe (Toshiba): Design of Heterogeneous Multi-core SoCs for Image Recognition and an Architecture for Deep Convolutional Networks

10:30am - 10:45am

Coffee Break

10:45am - 12:00pm

Session 2: Frontiers of Learning Algorithms, Chair: Wenyao Xu (SUNY Buffalo)

Hector Valdez (Raytheon): Hierarchical Deep ATR

Peter Chin (Draper Lab): Applications of Deep Networks - from Github to Brain

Jeff Schneider (Uber): Machine Learning Algorithms I Wish We Had in Hardware

12:00pm - 1:30pm

Lunch

1:30pm - 2:45pm

Session 3: Frontiers of Hardware Design, Chair: Dhireesha Kudithipudi (RIT)

Andrew Cassidy (IBM): The TrueNorth Neurosynaptic Processor: Cognitive Computing at Low-Power, Real-Time, and Large-Scale

Emre Neftci (UCSD): Neuromorphic Learning Machines

Damien Querlioz (CNRS): Revisiting Memory for Neuroinspired Systems

2:45pm - 4:30pm

Session 4: Poster Session

4:30pm - 5:00pm

Workshop summary

Technical Program Committee

Co-chairs:

Yu (Kevin) Cao, Arizona State University

Xin Li, Carnegie Mellon University

Jieping Ye, University of Michigan

Members:

Rob Aitken, ARM

Shawn Blanton, Carnegie Mellon University

Sankar Basu, National Science Foundation

Maxim Bazhenov, University of California, Riverside

Yiran Chen, University of Pittsburgh

Kailash Gopalakrishnan, IBM

Yiorgos Makris, University of Texas, Dallas

Kendel McCarley, Raytheon

Mustafa Ozdal, Bilkent University, Turkey

Jae-sun Seo, Arizona State University

Yuan Xie, University of California, Santa Barbara

Chris Yu, Charles Stark Draper Laboratory


Last updated on November 30, 2015. Contents subject to change. All rights reserved.