CSE Colloquium: Sparsity: Challenge or Opportunity?
Zoom Information: Join from PC, Mac, Linux, iOS or Android: https://psu.zoom.us/j/97407779067?pwd=amxValZSd1l4dHUwU01mMGZNNmVTUT09 Password: 973686
or iPhone one-tap (US Toll): +13017158592,97407779067# or +13126266799,97407779067#
or Telephone (US Toll): Dial +1 301 715 8592, +1 312 626 6799, +1 646 876 9923, +1 253 215 8782, +1 346 248 7799, or +1 669 900 6833. Meeting ID: 974 0777 9067. Password: 973686. International numbers available: https://psu.zoom.us/u/ac1bROXx
ABSTRACT: Sparse problems – computer programs in which data lack spatial locality in memory – are core components of several crucial domains such as recommendation systems, computer vision, robotics, graph analytics, and scientific computing. Today, computers and supercomputers containing millions of CPUs and GPUs actively execute sparse problems. However, even modern high-performance CPUs and GPUs are poorly suited to sparse problems, utilizing only a tiny fraction of their peak performance. This occurs because of the mismatch between the capabilities of the hardware and the nature of sparse problems. Even recent domain-specific architectures often target only dense data structures, and therefore suffer the same performance degradation that CPUs and GPUs experience when executing sparse problems.
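To make the notion of irregular memory access concrete, the following is a minimal, generic sketch (not taken from the talk) of sparse matrix-vector multiplication in the common CSR format; the indirect load x[col_idx[k]] is the kind of data-dependent access that defeats caches, prefetchers, and vector units on conventional hardware.

```python
# Generic illustration (not from the talk): sparse matrix-vector multiply (SpMV)
# with the matrix stored in Compressed Sparse Row (CSR) form. The address of
# each load from x depends on data (col_idx), so accesses lack spatial locality.

def spmv_csr(row_ptr, col_idx, values, x):
    """Compute y = A @ x, with A stored in CSR form."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # Nonzeros of row i occupy values[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # Indirect, data-dependent load: hard to prefetch or vectorize.
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Tiny example: a 3x3 matrix with 4 nonzeros.
row_ptr = [0, 2, 3, 4]
col_idx = [0, 2, 1, 0]
values  = [1.0, 2.0, 3.0, 4.0]
x       = [1.0, 1.0, 1.0]
print(spmv_csr(row_ptr, col_idx, values, x))  # [3.0, 3.0, 4.0]
```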
In this talk, I present my research, which provides solutions to the four main challenges that prevent sparse problems from achieving high performance: computation underutilization, slow decompression, data dependencies, and irregular/inefficient memory accesses. I focus in particular on the last two challenges and illustrate how my research converts mathematical dependencies into gate-level dependencies at the software level and exploits dynamic partial reconfiguration at the hardware level, so that sparse scientific problems execute more quickly than they do on conventional architectures. I also explain how my research deals with the sparsity of data by using an intelligent reduction tree near memory to process sparse data while they are being transferred – neither where the data reside nor where the dense computations occur.
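The reduction-tree idea can be sketched generically in software. The toy model below is my own illustration under that assumption, not the speaker's near-memory accelerator design; it only shows how a binary tree of adders can accumulate sparse partial products level by level as they stream past.

```python
# Purely illustrative software model of a binary reduction tree (NOT the
# speaker's design): values are combined pairwise, level by level, as a
# hardware tree of adders would combine them while data streams by.

def reduction_tree_sum(stream):
    """Reduce a stream of values level by level, as a hardware tree would."""
    level = list(stream)
    while len(level) > 1:
        nxt = []
        # Each adjacent pair maps to one adder node at this tree level.
        for i in range(0, len(level) - 1, 2):
            nxt.append(level[i] + level[i + 1])
        if len(level) % 2:          # an odd leftover value is forwarded unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0] if level else 0.0

# Example: summing the nonzero partial products of one output element.
partials = [0.5, 2.0, 1.5, 4.0, 3.0]
print(reduction_tree_sum(partials))  # 11.0
```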
BIOGRAPHY: Bahar Asgari is a Ph.D. candidate in the School of Electrical and Computer Engineering at Georgia Tech. Her doctoral dissertation, advised by Professor Sudhakar Yalamanchili and Professor Hyesoon Kim, focuses on improving the execution performance of sparse problems. Her proposed hardware accelerators and hardware/software co-optimization solutions address the essential challenges of sparse problems and benefit application domains ranging from machine learning to high-performance scientific computing. Beyond her dissertation research, Bahar has conducted research in collaboration with other research scientists and faculty at Georgia Tech, as she believes that collaboration is key to innovation. Bahar’s research and collaborative work have appeared at top-tier computer architecture conferences, including HPCA, ASPLOS, DAC, DATE, IISWC, ICCD, and DSN, as well as in high-impact journals. Bahar was selected to participate in Rising Stars 2019, an intensive academic career workshop for women in EECS.
Event Contact: Vijay Narayanan