High Performance Zero-Memory Overhead Direct Convolutions

Tuesday October 2, 2018
Location: CIC 4th floor Panther Hollow Conference Room
Time: 4:30PM-5:30PM


The computation of convolution layers in deep neural networks typically relies on high performance routines that trade space for time, using additional memory (either for packing purposes or as required by the algorithm) to improve performance. The problems with such an approach are two-fold. First, these routines incur additional memory overhead, which reduces the overall size of the network that can fit on embedded devices with limited memory capacity. Second, these high performance routines were not optimized for performing convolution, which means that the performance obtained is usually less than conventionally expected. In this paper, we demonstrate that direct convolution, when implemented correctly, eliminates all memory overhead and yields performance that is 10% to 400% better than existing high performance implementations of convolution layers on conventional and embedded CPU architectures. We also show that a high performance direct convolution exhibits better scaling behavior, i.e., it suffers a smaller performance drop as the number of threads increases.
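To make the "zero memory overhead" point concrete, below is a minimal sketch of a direct convolution in C: the output is computed straight from the input and kernel tensors with no intermediate packing buffer. The NCHW layout, stride 1, lack of padding, and all names are illustrative assumptions; this is not the talk's optimized loop ordering or blocking, only the direct form it starts from.

    #include <stddef.h>

    /* Direct convolution sketch (assumed NCHW layout, stride 1, no padding).
     * in:  C x H x W input, w: K x C x R x S kernel,
     * out: K x (H-R+1) x (W-S+1) output.
     * No scratch buffer is allocated: the zero-memory-overhead property. */
    static void direct_conv(const float *in, const float *w, float *out,
                            size_t C, size_t H, size_t W,
                            size_t K, size_t R, size_t S)
    {
        size_t Ho = H - R + 1, Wo = W - S + 1;
        for (size_t k = 0; k < K; ++k)             /* output channels */
            for (size_t y = 0; y < Ho; ++y)        /* output rows     */
                for (size_t x = 0; x < Wo; ++x) {  /* output columns  */
                    float acc = 0.0f;
                    for (size_t c = 0; c < C; ++c)          /* input channels */
                        for (size_t r = 0; r < R; ++r)      /* kernel rows    */
                            for (size_t s = 0; s < S; ++s)  /* kernel columns */
                                acc += in[(c * H + (y + r)) * W + (x + s)]
                                     * w[((k * C + c) * R + r) * S + s];
                    out[(k * Ho + y) * Wo + x] = acc;
                }
    }

By contrast, the space-for-time routines the abstract refers to materialize extra buffers (e.g., packed or lowered copies of the input) before computing; the talk's contribution is making the direct form above run fast without that overhead, which this naive loop nest does not attempt.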


Jiyuan Zhang is a fifth-year PhD student in the ECE department at CMU, working with Professor Franz Franchetti. Her research interests lie in high performance computing and computer architecture.