CSE Colloquia: High-performance, Cross-platform Vetting of the Closed-source Software Ecosystem
Abstract:
Security vulnerabilities are the root cause of over 90% of cyberattacks. At 25 bugs for every 1,000 lines of code, the discovery of security flaws is vital to mitigating cyberattacks. The go-to method of vulnerability discovery is the developer-derived test case: a developer encodes their understanding of program behavior in a set of test cases, then executes those test cases to verify that the program behaves as expected. Unfortunately, the programmer's mental model of the program is often incomplete and over-constrained. These limitations cause programmers to miss many security vulnerabilities that stem from seemingly impossible test cases. Coverage-guided mutational fuzz testing (i.e., fuzzing) fills the gaps that developers leave by being underconstrained, i.e., testing with an "anything is possible" mindset. This underconstrained approach finds vulnerabilities that developers miss, but it requires throwing millions of test cases at the program, because most of those test cases will be uninteresting from a program-behavior perspective. This makes test case execution rate the critical metric of fuzzing effectiveness.
In this talk, I will tell you about my work on increasing the test case execution rate of fuzzing. The central observation of my work is that fuzzers spend over 95% of their time executing test cases that will eventually be discarded as uninteresting. Leveraging this observation, I build a fuzzer that encodes the frontier of test case exploration into the program binary, so programs self-report when a test case is interesting. The fuzzer then spends effort monitoring the coverage of only the roughly 1-in-10,000 test cases that prove interesting. I call this Coverage-Guided Tracing (CGT). CGT removes the overhead of monitoring the code coverage of every executed test case and improves performance by over 600%. On top of CGT, I add support for the most common code coverage metrics shown to increase fuzzing effectiveness, namely edge coverage and hit-count coverage, without sacrificing performance. I will conclude the talk with a look at the current fuzzing work underway in my lab, which focuses on increasing fuzzing performance on Windows and on better leveraging program source code for higher-performance fuzzing.
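To make the CGT idea concrete, here is a minimal, hypothetical sketch (not the speaker's actual implementation) of a coverage-guided fuzzing loop in Python. The names `run_fast`, `trace_full`, and `blocks_hit` are illustrative stand-ins: `run_fast` models the instrumented binary that self-reports only when a test case reaches code outside the current coverage frontier, and the expensive full-coverage trace runs only for those rare interesting cases.

```python
import random

random.seed(0)  # deterministic for illustration


def blocks_hit(test_case):
    """Toy stand-in for real execution: pretend each input byte
    deterministically selects one of 16 basic blocks."""
    return {b % 16 for b in test_case}


def run_fast(test_case, frontier):
    """Cheap check: does this test case reach any block not yet covered?
    In real CGT, interest points compiled into the binary self-report this,
    so the common (uninteresting) case runs at near-native speed."""
    return any(b not in frontier for b in blocks_hit(test_case))


def trace_full(test_case):
    """Expensive full-coverage trace, paid only for interesting cases."""
    return set(blocks_hit(test_case))


def fuzz(seed, iterations=10_000):
    frontier = set()   # code already covered
    corpus = [seed]    # interesting inputs kept for further mutation
    for _ in range(iterations):
        tc = bytearray(random.choice(corpus))
        if tc:
            tc[random.randrange(len(tc))] = random.randrange(256)  # mutate
        tc = bytes(tc)
        if run_fast(tc, frontier):        # rare case: new coverage found
            frontier |= trace_full(tc)    # only now pay for a full trace
            corpus.append(tc)
    return frontier, corpus


frontier, corpus = fuzz(b"hello")
```

Under this sketch's assumptions, almost every iteration takes only the cheap `run_fast` path; the full trace runs just for the handful of test cases that expand the frontier, which is the source of CGT's performance gain.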
Bio:
Matthew Hicks is an Associate Professor of Computer Science at Virginia Tech. His research focuses on securing computer software and hardware from attacks during their design and implementation/fabrication, including side channels. Dr. Hicks has a special interest in the automated discovery of security vulnerabilities, generally through a technique called fuzzing, which combines underconstrained test cases and feedback-driven mutation to fill in the gaps left by traditional testing. His A2 attack paper won the 2016 IEEE Symposium on Security and Privacy (Oakland) Best Paper Award. In 2018, he was recognized as a DARPA Riser, followed by a DARPA Young Faculty Award in 2019 and the DARPA Director's Fellowship in 2021. In 2020, his defense against foundry-level hardware attacks received an R&D 100 Award. In 2021, he was awarded the NSF CAREER Award for his work in hardware security and recognized as Virginia Tech College of Engineering's Outstanding New Assistant Professor.
Prior to Virginia Tech, Dr. Hicks was a member of the technical staff at MIT Lincoln Laboratory, where he led several hardware security projects. Before that, he was a lecturer and postdoctoral researcher at the University of Michigan, focusing on the intersection of computer architecture and computer security. He holds a PhD and an MS from the University of Illinois at Urbana-Champaign and a BS from the University of Central Florida.
Event Contact: Timothy Zhu