W01.2 Invited Session 1: Design-for-dependability for AI hardware accelerators in the edge
Abstract: AI has seen an explosion of real-world applications in recent years; for example, it is the backbone of self-driving and connected cars. The design of AI hardware accelerators to support compute-intensive and memory-hungry AI workloads is an ongoing effort aimed at optimizing the energy-area trade-off. This special session will focus on dependability aspects in the design of AI hardware accelerators. It is often tacitly assumed that neural networks implemented in hardware inherit the remarkable fault tolerance capabilities of the biological brain. In recent years, this assumption has been disproven by a number of fault injection experiments. The three talks will cover reliability assessment and fault tolerance of Artificial Neural Networks and Spiking Neural Networks implemented in hardware, as well as the impact of approximate computing on their fault tolerance capabilities.
Presentations:
- Fault Tolerance of Neural Network Hardware Accelerators for Autonomous Driving
Adrian Evans (CEA-LETI, Grenoble, France), Lorena Anghel (Grenoble-INP, SPINTEC, Grenoble, France), and Stéphane Burel (CEA-LETI, Grenoble, France)
- Exploiting Approximate Computing for Efficient and Reliable Convolutional Neural Networks
Alberto Bosio (École Centrale de Lyon, INL, Lyon, France)
- Reliability Assessment and Fault Tolerance of Spiking Neural Network Hardware Accelerators
Haralampos-G. Stratigopoulos (Sorbonne University, CNRS, LIP6, Paris, France)