Emerging Deep Learning Accelerators

In this HiPEAC 2019 workshop, we hope to bring together researchers to discuss requirements, opportunities, challenges, and next steps in developing novel approaches for accelerating deep neural networks.

HiPEAC 2019 workshop

23 January 2019

Valencia, Spain

Deep Learning is receiving much attention these days due to the remarkable performance achieved in several fields (e.g. computer vision, speech recognition, and machine translation), but this success brings challenges for hardware architects and computation-optimization researchers. Deep Learning models generally have large memory footprints and require many operations for both training and inference. Accelerating these operations has clear advantages: it reduces energy consumption (e.g. in data centers) and makes these models usable on smaller devices at the edge of the Internet. This workshop aims to enable discussion of emerging acceleration techniques and computation paradigms for deep learning algorithms. The timing of this workshop is ideal: with European regulations tightening data privacy, more computation and inference must be performed at the edge.

Call For Contributions


Topics of interest include (but are not limited to):

  • Novel parallel computing architectures: GPUs, FPGAs, and heterogeneous multi/many-core designs.

  • Crazy architectural ideas: focused on accelerating deep learning workloads/algorithms.

  • Cloud and edge computing: hardware and software methods focused on accelerating both training (cloud) and inference (edge).

  • Compilers, tools, and programming models: focused on accelerating deep learning workloads/algorithms.

Important Dates

  • Submission deadline: November 23rd, 2018 (11:59 PM PDT) — extended from November 9th

  • Decisions to authors: December 14th, 2018

Paper Format

Regular (up to 9 pages) or short (up to 5 pages) papers using the SIGCHI Extended Abstracts format. Papers should be submitted in PDF and should not be anonymized.

Submission Site

Submissions can be made at easychair.org/conferences/?conf=edla2019.

Submission Options

Papers will be reviewed by the workshop's technical program committee based on quality, relevance to the workshop's topics, and, foremost, potential to spark discussion about directions, insights, and solutions in the context of deep learning accelerators. Research papers, case studies, and position papers are all welcome.

In particular, we encourage authors to keep the following options in mind when preparing submissions:

  • Works-In-Progress: To facilitate the sharing of thought-provoking ideas and high-potential but preliminary research, authors are welcome to submit descriptions of early-stage, in-progress, and/or exploratory work in order to elicit feedback, discover collaboration opportunities, and spark discussion.

Program (tentative)

23rd January 2019 (Room 9)

  Time          Event
  9:00–9:45     Keynote speaker
  9:45–10:30    Paper presentations
  10:30–11:00   Coffee break
  11:00–12:30   Paper presentations

Organizers and TPC

Jose Cano (University of Glasgow)

Valentin Radu (University of Edinburgh)

David Gregg (Trinity College Dublin)

Nuria Pazos (University of Applied Sciences (HES-SO))

Elliot Crowley (University of Edinburgh)

Miguel de Prado (ETH Zurich)

Jack Turner (University of Edinburgh)

Andrew Mundy (ARM Research)

Tim Llewellynn (NVISO)


If you have any questions, please feel free to send an email to edla-info@inf.ed.ac.uk.