CSE Colloquium: How to Deal with Heavy Computations at the Edge?
Zoom Information: Join from PC, Mac, Linux, iOS or Android: https://psu.zoom.us/j/96341426396?pwd=VitBMWtpVE0xMzhFYU1mTGpvb01kdz09 Password: 731645
or iPhone one-tap (US Toll): +13126266799,96341426396# or +16468769923,96341426396#
or Telephone (US Toll): +1 312 626 6799, +1 646 876 9923, +1 301 715 8592, +1 346 248 7799, +1 669 900 6833, +1 253 215 8782
Meeting ID: 963 4142 6396
Password: 731645
International numbers available: https://psu.zoom.us/u/ad8XYAItEO
ABSTRACT: Edge systems constitute an increasingly vital segment of computing systems, one to which we entrust large portions of our daily lives. Edge systems (any compute agent other than large-scale datacenter machines) are everywhere, from Internet of Things (IoT) devices to autonomous vehicles and robots. One common feature of future edge systems is intelligence. Unlike traditional computing systems, which are engineered with abundant resources and operate under continuous monitoring, intelligent edge systems must function within tight design constraints and under very different conditions. For instance, an intelligent edge system must arbitrate among its limited resources while guaranteeing functionality and timely execution. The goal of my research is to enable efficient and effective edge systems through hardware-software synergy, targeting the most favorable points within their unique multi-dimensional design space.
In this talk, I focus on two challenges of edge systems: their physical limitations, through a study of quadcopter drones, and the barrier of heavy computation, by investigating the execution of deep neural networks (DNNs) on collaborative robots and IoT devices. For drones, I formalize the fundamental drone subsystems, show how computation shapes their design space, and propose an open-source, customizable drone. For DNNs on the edge, my key insight is that devices can break their individual resource constraints by distributing computation across collaborating peer devices. In my approach, edge devices cooperate to perform single-batch inference in real time while exploiting several model-parallelism methods. Additionally, to make efficient use of distributed computing resources, I present novel handcrafted and automatically generated models composed of several independent, narrow branches, which offer low communication overhead and high parallelization opportunities.
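To illustrate the model-parallelism idea described above, the following is a minimal sketch (not taken from the talk; all names are hypothetical) of splitting a single fully connected layer's output neurons across several "devices," each holding only a column slice of the weight matrix, and concatenating their partial results:

```python
import numpy as np

def split_layer_inference(x, W, num_devices):
    """Model-parallel fully connected layer (illustrative sketch):
    each simulated device holds one column slice of the weight
    matrix, computes its share of the output neurons, and the
    partial outputs are concatenated."""
    shards = np.array_split(W, num_devices, axis=1)   # one weight shard per device
    partial_outputs = [x @ shard for shard in shards]  # would run in parallel on peers
    return np.concatenate(partial_outputs)

# Single-device reference computation of the same layer
rng = np.random.default_rng(0)
x = rng.standard_normal(8)         # input activations
W = rng.standard_normal((8, 12))   # layer weights
full = x @ W
distributed = split_layer_inference(x, W, num_devices=3)
assert np.allclose(full, distributed)  # distributed result matches the reference
```

In a real deployment, each shard would live on a separate device and the concatenation step would require communication, which is why narrow, independent branches that rarely need to exchange activations are attractive.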
BIOGRAPHY: Ramyad Hadidi (https://ramyadhadidi.github.io/) is currently working toward a Ph.D. in computer science under the supervision of Professor Hyesoon Kim at the Georgia Institute of Technology. His research interests include, but are not limited to, computer architecture, robotics, edge computing, and machine learning. In his thesis, “Deploying Deep Neural Networks in Edge with Distribution”, individual edge devices break their resource constraints by distributing their computation across collaborating peer edge devices. Beyond his dissertation research, Ramyad has contributed to work on processing-in-memory, GPU systems, and hardware accelerators for sparse problems, believing that a balance between depth and breadth leads to genuine research problems.
Event Contact: Jack Sampson