The time zone for all times mentioned at the DATE website is CEST – Central European Summer Time (UTC+2). AoE = Anywhere on Earth.

W05.6 Invited Talks: From Pass-Transistor-Logic to Computing-In-Memory

Session chair
Leibin Ni, Huawei Technologies Co., Ltd., China
Session chair
Christian Weis, University of Kaiserslautern, Germany
Presentations

W05.6.1 Invited Talk I: "Research and Design of Pass Transistor Based Multipliers and their Design for Test for Convolutional Neural Network Computation"

Speaker
Zhiyi Yu, Sun Yat-sen University, Zhuhai, China
Speaker
Ningyuan Yin, Sun Yat-sen University, Zhuhai, China

Abstract: Convolutional Neural Networks (CNNs) feature different bit widths at different layers and are widely used in mobile and embedded applications. A CNN implementation may include multipliers, which can incur large overheads and suffer from a high timing error rate due to their long delay. The pass-transistor-logic (PTL) based multiplier is a promising solution to these issues: it uses fewer transistors and reduces the number of gates in the critical path, which lowers the worst-case delay and, in turn, the timing error rate. In this talk, we present PTL-based multipliers and their design for test (DFT). An error model is built to analyze the error rate and to support DFT. According to simulation results, the energy per operation (J/op) of PTL multipliers can be reduced by over 20% compared with traditional CMOS-based multipliers.
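The abstract's argument (fewer gates in the critical path, hence a lower timing-error rate) can be illustrated with a minimal sketch. This is not the authors' error model; it simply treats the timing-error rate as the probability that a Gaussian-distributed critical-path delay exceeds the clock period, with all numbers chosen purely for illustration.

```python
import math

def timing_error_rate(gates_in_path, gate_delay_ps, sigma_ps, t_clk_ps):
    """P(critical-path delay > clock period) under a Gaussian delay model."""
    mean = gates_in_path * gate_delay_ps
    std = sigma_ps * math.sqrt(gates_in_path)
    z = (t_clk_ps - mean) / std
    # Gaussian upper-tail probability via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# Illustrative numbers only: with fewer gates in the critical path,
# the PTL multiplier sees a lower error rate at the same clock period.
cmos_err = timing_error_rate(gates_in_path=24, gate_delay_ps=20,
                             sigma_ps=5, t_clk_ps=520)
ptl_err = timing_error_rate(gates_in_path=18, gate_delay_ps=20,
                            sigma_ps=5, t_clk_ps=520)
print(ptl_err < cmos_err)  # the shorter path yields the lower error rate
```

The qualitative point is the one made in the abstract: shortening the critical path moves the delay distribution away from the clock edge, shrinking the tail probability that corresponds to a timing error.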


W05.6.2 Invited Talk II: "WTM2101: Computing-in-memory SoC"

Speaker
Shaodi Wang, Zhicun (WITINMEM) Technology Co. Ltd., China

Abstract: In this talk, we introduce an ultra-low-power neural processing SoC with computing-in-memory technology. We have designed, fabricated, and tested chips based on nonvolatile floating-gate technology, which simultaneously addresses the data-processing and communication bottlenecks in neural networks. Furthermore, thanks to the nonvolatility of the floating-gate cell, the computing-in-memory macros can be powered down during idle states, saving leakage power in IoT use cases such as voice command recognition. The chip supports multiple network types, including DNN, TDNN, and RNN, for different applications.


W05.6.3 Invited Talk III: "Implementation and performance analysis of computing-in-memory towards communication systems"

Speaker
Zhihang Wu, Huawei Technologies Co., Ltd., China
Speaker
Leibin Ni, Huawei Technologies Co., Ltd., China

Abstract: Computing-in-memory (CIM) is an emerging technique for overcoming the memory-wall bottleneck. It reduces data movement between memory and processor and achieves significant power reduction in neural network accelerators, especially in edge devices. Communication systems face power and heat-dissipation problems when implementing DSP algorithms in ASICs, so applying CIM techniques to communication systems to improve energy efficiency would have a great impact. This talk discusses computing-in-memory techniques for communication systems. As examples, some DSP modules (such as FIR filters, MIMO, and FECs) will be re-organized and mapped onto computing-in-memory units.
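The re-organization mentioned in the abstract can be sketched for the simplest case, a FIR filter: the filter is rewritten as a matrix-vector product, which is the operation a CIM array performs natively (the tap matrix would be programmed into the memory cells). This is a hypothetical illustration of the general mapping idea, not the talk's actual implementation.

```python
import numpy as np

def fir_as_matvec(x, taps):
    """Compute y[n] = sum_k taps[k] * x[n-k] as one matrix-vector product."""
    n_out = len(x) - len(taps) + 1
    # Each row holds one (time-reversed) window of the input signal;
    # on a CIM array, the taps live in memory and each row is applied to them.
    windows = np.array([x[i:i + len(taps)][::-1] for i in range(n_out)])
    return windows @ np.array(taps)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
taps = [0.5, 0.5]  # simple 2-tap moving-average filter
y = fir_as_matvec(x, taps)
# Matches direct convolution: np.convolve(x, taps, mode="valid")
print(y)  # [1.5 2.5 3.5 4.5]
```

The same pattern extends to the other modules named in the abstract: any DSP kernel expressible as a (possibly blocked) matrix-vector product can in principle be mapped onto CIM units, with the fixed coefficients stored in the memory array.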