Type: Journal Article
Published in: -
Year: 2021
Author(s): Gebregiorgis, Anteneh and Tahoori, Mehdi
URL: -
Access: Open access
ID: 1012051
Approximate Learning and Fault-Tolerant Mapping for Energy-Efficient Neuromorphic Systems
Brain-inspired deep neural networks such as Convolutional Neural Networks (CNNs) have shown great potential in solving difficult cognitive problems such as object recognition and classification. However, such architectures have high computational energy demands and are sensitive to variation effects, making them impractical for energy-constrained embedded learning platforms. To address this issue, we propose a learning and mapping approach that applies approximate computing in the early design phases, combining layer-wise pruning with a fault-tolerant weight mapping scheme to obtain reliable and energy-efficient CNNs. In the proposed approach, an approximate CNN is first prepared by layer-wise pruning of approximable neurons, i.e., those with high error-tolerance margins, using a two-level approximate learning methodology. The pruned network is then retrained to recover accuracy by fine-tuning the weight values. Finally, a fault-tolerant layer-wise weight mapping scheme aggressively reduces the memory operating voltage when loading the weights of error-resilient layers, improving energy efficiency. The combination of approximate learning and fault-tolerance-aware downscaling of the memory operating voltage thus enables a robust and energy-efficient approximate inference engine for CNN applications. Simulation results show that the proposed fault-tolerant approximate learning approach improves the energy efficiency of CNN inference engines by more than 50% with less than 5% loss in classification accuracy. In addition, more than 26% energy saving is achieved by the proposed layer-wise mapping-based downscaling of the cache memory operating voltage.
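The abstract outlines the flow at a high level only. The sketch below is a minimal illustration of the first two steps (layer-wise pruning of error-tolerant neurons, then fine-tuning), assuming PyTorch; L1-magnitude unstructured pruning stands in for the paper's error-tolerance-margin criterion, and the names layer_ratios, prune_layerwise, and fine_tune are hypothetical, not from the paper.

    # Illustrative sketch only: the paper's two-level approximate learning
    # algorithm is not reproduced here. This shows layer-wise pruning followed
    # by fine-tuning, with L1-magnitude pruning standing in for the paper's
    # error-tolerance-margin criterion. All names below are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    def prune_layerwise(model, layer_ratios):
        """Prune each listed Conv2d/Linear layer by its own ratio.

        layer_ratios maps layer names to the fraction of weights to remove;
        more error-resilient layers would be given larger ratios.
        """
        for name, module in model.named_modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)) and name in layer_ratios:
                prune.l1_unstructured(module, name="weight",
                                      amount=layer_ratios[name])
                prune.remove(module, "weight")  # bake the pruning mask in
        return model

    def fine_tune(model, loader, epochs=3, lr=1e-4):
        """Retrain the pruned network to recover classification accuracy."""
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model

The third step, fault-tolerant layer-wise weight mapping with memory-voltage downscaling, is a hardware-level technique that this software sketch does not model; a rough software emulation would inject bit errors into the weights of the aggressively scaled layers and verify that accuracy stays within the tolerated margin.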