Research Achievements

Conference Papers

Scalable software/hardware co-design for reliable complex DNN activation function acceleration
Author
JaeHyung Ko, Joon-Sung Yang
Conference
제25회 한국반도체테스트학술대회 (The 25th Korean Semiconductor Test Conference)
File
Scalable software_hardware co_design for reliable complex DNN activation function acceleration.pdf (1.2M)

[ABSTRACT]

The recent trend of activation functions in deep neural network (DNN) models has shifted from the simple Rectified Linear Unit (ReLU) to more sophisticated alternatives such as the Gaussian Error Linear Unit (GELU) and the Sigmoid Linear Unit (SiLU), resulting in intensive computations due to their complex compositions. This paper presents a novel hardware-software co-design framework that addresses the computational challenges posed by the increased complexity of activation functions in DNN models.
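For reference, the standard definitions of these functions make the cost gap concrete: ReLU is a single comparison, whereas GELU and SiLU each require transcendental evaluations. A minimal Python sketch of the textbook definitions (illustrative only, not taken from the paper):

import math

def relu(x):
    # A comparison and a select: trivially cheap in hardware.
    return max(x, 0.0)

def gelu(x):
    # GELU(x) = x * Phi(x), with Phi the standard normal CDF;
    # each evaluation needs an erf (or a tanh-based approximation).
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def silu(x):
    # SiLU(x) = x * sigmoid(x); each evaluation needs an
    # exponential and a division.
    return x / (1.0 + math.exp(-x))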

A novel software algorithm approximates activation functions with piecewise linear (PWL) approximation, incorporating non-uniform interpolation that characterizes each interpolation interval with a set of unique bit patterns. A scalable, reprogrammable hardware architecture is proposed to accelerate the approximated activation function computation; it supports various floating-point data formats, offering adaptability to different deep learning applications, while eliminating the subtractors necessitated in prior implementations.
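The following Python sketch is a hypothetical illustration of this idea, not the paper's algorithm: it assumes the interpolation interval is identified by the leading bits (sign, exponent, high mantissa) of the input's float32 encoding, and that storing a precomputed intercept is what removes the subtractor required by prior y = a*(x - x0) + y0 formulations. The names NUM_INDEX_BITS and build_table, and the probe range, are invented for the example.

import math
import struct

NUM_INDEX_BITS = 12  # hypothetical width: sign + exponent + top mantissa bits

def float_bits(x):
    # Reinterpret a float32 value as its 32-bit pattern.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def segment_index(x):
    # Hypothetical selection scheme: the leading bits of the float32
    # encoding form the table index, so picking an interval is a
    # bit-slice rather than a comparator tree, and interval widths are
    # naturally non-uniform (they track the floating-point spacing).
    return float_bits(x) >> (32 - NUM_INDEX_BITS)

def gelu(x):
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def build_table(f, lo=-8.0, hi=8.0, probes=1 << 16):
    # Fit one line segment per observed bit-pattern interval. Storing
    # the intercept b = f(x0) - a*x0 instead of the breakpoint x0 lets
    # the datapath evaluate y = a*x + b with a single multiply-add and
    # no subtractor (prior PWL schemes compute a*(x - x0) + y0).
    buckets = {}
    step = (hi - lo) / probes
    for i in range(probes + 1):
        x = lo + i * step
        buckets.setdefault(segment_index(x), []).append(x)
    table = {}
    for idx, xs in buckets.items():
        x0, x1 = min(xs), max(xs)
        a = (f(x1) - f(x0)) / (x1 - x0) if x1 > x0 else 0.0  # secant fit
        table[idx] = (a, f(x0) - a * x0)
    return table

TABLE = build_table(gelu)

def pwl_gelu(x):
    # Fallback (0.5, 0.0) covers unprobed near-zero patterns, where
    # GELU(x) ~= 0.5*x; a real scheme would cover all bit patterns.
    a, b = TABLE.get(segment_index(x), (0.5, 0.0))
    return a * x + b  # one multiply-add per activation

for x in (-3.0, -0.5, 0.25, 1.0, 4.0):
    print(f"x={x:+.2f}  pwl={pwl_gelu(x):+.6f}  exact={gelu(x):+.6f}")

Because the index is derived from exponent bits, segments are automatically denser near zero, where these activations bend most sharply, which is one plausible motivation for non-uniform interpolation.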

The framework significantly improves the approximation accuracy of activation functions, achieving an average 17.3× improvement in squared Average Absolute Error (sq-AAE) while offering speed improvements over a cutting-edge accelerator. Furthermore, the framework demonstrates robust performance across various DNN models, maintaining high accuracy.


Note: Selected as an outstanding paper at the conference.