Research at VLab
VLab news:
- College of Engineering highlights Tegan Brennan's research.
- Convergence article on computer security research at UCSB.
- An interview with Tegan Brennan.
- Lucas Bang has successfully defended his dissertation titled "Software Side-Channel Analysis" and completed his Ph.D. He is now an assistant professor in the Department of Computer Science at Harvey Mudd College. Congratulations Lucas!
- Abdulbaki Aydin has successfully defended his dissertation titled "Automata-based Model Counting String Constraint Solver for Vulnerability Analysis" and completed his Ph.D. He joined Microsoft. Congratulations Baki!
- Ivan Bocic has successfully defended his dissertation titled "Data Model Verification via Theorem Proving" and completed his Ph.D. He joined Google. Congratulations Bo!
- Prof. Bultan received the UCSB Academic Senate Outstanding Graduate Mentor Award, an annual campus-wide award that recognizes faculty whose mentoring is considered exemplary. You can read the announcement for Prof. Bultan's Outstanding Graduate Mentor Award here and an interview with him about the award here.
- Faculty profile interview with Prof. Bultan.
- Lucas Bang received the best presentation award at GSWC'16!
- We have a very exciting new project and are looking for postdocs and PhD students to join VLab to work on it!
- A very nice graduate student profile on VLab member Ivan Bocic!
- VLab alumnus Muath Alkhalaf receives the 2015 ACM SIGSOFT Outstanding Doctoral Dissertation Award! We are very proud of Muath. This is a wonderful recognition of the outstanding contributions in his dissertation.
- VLab sweep at GSWC'14: Abdulbaki Aydin's paper won the best paper award and Ivan Bocic's paper was the best paper runner-up.
- Our string analysis tool STRANGER was identified as the best string constraint solver in an empirical study.
- ACM SIGSOFT Distinguished paper award at ASE 2014
- New NSF grant to VLab is in the news:
Our research focuses on automated verification techniques and their application to software in order to improve software security and dependability. As computer systems become more pervasive, their dependability becomes increasingly important, and there is an ongoing shift in focus, both in academia and industry, from performance to dependability and security. The size and complexity of modern software systems inevitably lead to errors and vulnerabilities in both the design and implementation phases. The goal of our research at VLab is to develop automated verification techniques that help developers identify and remove errors and vulnerabilities in modern software systems and applications. Some of the research topics we work on are: quantitative program analysis and model counting, security policy analysis for the cloud, side-channel analysis, neural network verification, string analysis, data model verification, analysis of service-oriented and distributed systems, design for verification, and infinite-state verification.
Quantitative Program Analysis and Model Counting: The modern world depends heavily on software running on an ever larger number of computing systems around us. Cyber-attacks that steal secret information from these systems pose an increasingly serious threat to society, and the amount of software in safety-critical systems such as cars and planes keeps growing. Software quality assurance has therefore become one of the fundamental problems of this computing-dominated era, and it is crucial to develop techniques that improve the quality of software systems. Quantitative program analysis is an emerging area with applications to software reliability, quantitative information flow analysis, side-channel detection, and attack synthesis. Recently, symbolic quantitative program analysis techniques based on symbolic execution and model counting constraint solvers have been applied to two critical problems in software quality assurance and security: probabilistic property checking and quantitative information flow analysis. In this project, we explore two significant applications of symbolic quantitative program analysis: 1) quantitative information flow and 2) quantitative assessment of testing difficulty. We develop techniques and frameworks that quantify information leakage in programs using symbolic execution and model counting constraint solvers, and we develop heuristic approaches for probabilistic reachability analysis that identify hard-to-reach program statements in order to assess testing difficulty.
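To make the model counting idea concrete, here is a small self-contained sketch (our own illustration, not the lab's tooling): it quantifies the Shannon-entropy leakage of a hypothetical one-comparison program by counting how many secret values produce each observable output. Brute-force enumeration over a tiny secret domain stands in for symbolic execution combined with a model counting constraint solver.

```python
# A minimal sketch of leakage quantification by model counting.
# Here the "model counter" is brute-force enumeration over a tiny secret
# domain; the research replaces this with symbolic execution plus a model
# counting constraint solver over path constraints.
from collections import Counter
from math import log2

def program(secret: int, public: int) -> bool:
    # Hypothetical program under analysis: reveals whether the secret
    # exceeds the attacker-controlled threshold `public`.
    return secret > public

def leakage_bits(secret_domain, public: int) -> float:
    # Partition the secrets by observation and count the "models" per class.
    counts = Counter(program(s, public) for s in secret_domain)
    n = len(secret_domain)
    # Shannon leakage for a deterministic program under a uniform prior:
    # H(secret) - H(secret | observation).
    prior_entropy = log2(n)
    cond_entropy = sum((c / n) * log2(c) for c in counts.values())
    return prior_entropy - cond_entropy

if __name__ == "__main__":
    domain = range(16)                 # a 4-bit secret
    print(leakage_bits(domain, 7))     # ~1.0 bit leaked by one comparison
```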
Security Policy Analysis for the Cloud:
Side-Channel Analysis: We are developing new analysis techniques and tools for identifying and verifying complexity-related and side-channel vulnerabilities in Java bytecode. Specifically, our work aims to develop scalable and effective techniques for identifying vulnerabilities related to the space and time resource usage of Java-based software systems. Our techniques are based on a novel combination of symbolic execution, model counting, and quantitative and qualitative analysis. This work is part of a multi-year collaborative research project with researchers from Carnegie Mellon University, Vanderbilt University, and Queen Mary University of London. Some recent research papers based on this project include: "Automata-based Model Counting for String Constraints," published in the Proceedings of the 27th International Conference on Computer Aided Verification (CAV 2015); "Automatically Computing Path Complexity of Programs," published in the Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2015); "String Analysis for Side Channels with Segmented Oracles," published in the Proceedings of the 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016); and "Synthesis of Adaptive Side-Channel Attacks," which will appear in the Proceedings of the 2017 IEEE Computer Security Foundations Symposium (CSF 2017).
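The following sketch conveys, under simplifying assumptions, the flavor of adaptive attack synthesis against a timing side channel: an early-exit comparison leaks the length of the matching prefix, and a hypothetical attacker repeatedly picks the guess whose timing observation splits the remaining candidate secrets most evenly. Counting candidates per observation again stands in for model counting; this is an illustration, not the algorithm from the CSF 2017 paper.

```python
# A minimal sketch of adaptive side-channel attack synthesis against a
# prefix-matching timing channel. All names and the timing model are
# assumptions made for this example.
from collections import defaultdict
from itertools import product
from math import log2

def timing(secret: str, guess: str) -> int:
    # Hypothetical early-exit comparison: running time reveals the length
    # of the matching prefix, a classic timing side channel.
    t = 0
    for s, g in zip(secret, guess):
        if s != g:
            break
        t += 1
    return t

def expected_gain(candidates, guess):
    # Expected information gain of a guess = entropy reduction over the
    # partition of candidate secrets induced by the timing observation.
    groups = defaultdict(int)
    for s in candidates:
        groups[timing(s, guess)] += 1
    n = len(candidates)
    return log2(n) - sum(c / n * log2(c) for c in groups.values())

def attack(secret, alphabet="ab", length=3):
    candidates = ["".join(p) for p in product(alphabet, repeat=length)]
    guesses = list(candidates)
    while len(candidates) > 1:
        # Greedy adaptive step: pick the most informative guess, observe,
        # and keep only the secrets consistent with the observation.
        best = max(guesses, key=lambda g: expected_gain(candidates, g))
        obs = timing(secret, best)
        candidates = [s for s in candidates if timing(s, best) == obs]
        print(f"guess={best} time={obs} remaining={len(candidates)}")
    return candidates[0]

if __name__ == "__main__":
    print("recovered:", attack("aba"))
```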
Neural Network Verification: Neural networks are used in a vast number of applications, so it is necessary to find ways to verify their decisions. We are working on quantitative verification, which determines not only whether a robustness constraint is violated, but also how often. Specifically, we work with quantized networks, where the weights and biases are stored as low-precision fixed-point values. These networks are well suited to applications where space and computing power are limited, but they pose verification challenges that their full-precision counterparts do not: while floating-point networks can be approximated with real values without losing much accuracy, the limited precision of quantized networks leads to much larger rounding errors that diverge from real-valued results. Our verification technique uses symbolic execution and integer model counting to count the number of solutions to user-provided constraints, given a fully trained quantized network.
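As a simplified illustration of quantitative robustness for quantized networks, the sketch below counts how many inputs in a small L-infinity ball around a given point change the predicted class of a tiny integer-only network. The network, its weights, and the exhaustive enumeration are assumptions made for the example; in our work the counting is done with symbolic execution and integer model counting.

```python
# A minimal sketch of counting robustness violations for a quantized network.
# Exhaustive enumeration over a small integer input space stands in for
# symbolic execution plus integer model counting.
from itertools import product

# Hypothetical 2-input, 2-class network with small fixed-point integer
# weights and integer biases; all arithmetic stays in integers.
W = [[3, -2], [-1, 4]]   # weights (fixed-point integers)
B = [5, -3]              # biases (already in the same scale)

def classify(x):
    scores = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(W, B)]
    return max(range(len(scores)), key=lambda i: scores[i])

def count_violations(x0, radius, lo=0, hi=15):
    # Count points in the L-infinity ball around x0 whose class differs
    # from the class of x0 (the "robustness violations").
    base = classify(x0)
    ball = [range(max(lo, xi - radius), min(hi, xi + radius) + 1) for xi in x0]
    total = violations = 0
    for x in product(*ball):
        total += 1
        if classify(list(x)) != base:
            violations += 1
    return violations, total

if __name__ == "__main__":
    v, t = count_violations([7, 7], radius=2)
    print(f"{v} of {t} points in the ball are misclassified")
```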
String Analysis: Web applications provide critical services over the Internet and frequently handle sensitive data. Unfortunately, web application development is error prone and results in applications that are vulnerable to attacks by malicious users. The global accessibility of critical web applications makes this an extremely serious problem. Most critical web application vulnerabilities, such as Cross-Site Scripting (XSS), SQL Command Injection (SQLCI), and Malicious File Execution (MFE), are due to improper use and sanitization of strings. We have developed string analysis techniques that automatically 1) identify whether a web application is vulnerable to attacks due to string manipulation errors, 2) if it is vulnerable, generate a characterization of the user inputs that might exploit the vulnerability (vulnerability signatures), and 3) generate patches that repair the vulnerability. Our string analysis approach is based on automata: the values that string expressions can take at each program point are characterized as deterministic finite automata (DFA). We developed symbolic forward and backward reachability analyses using DFA as a symbolic representation, and implemented the approach in a tool called Stranger that analyzes PHP programs. More recently we extended our results on string analysis to relational string analysis, automated string abstractions, client-side string analysis (for JavaScript), and differential string analysis.
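The core automata-based check can be illustrated with a small hand-rolled sketch (not the Stranger implementation): the possible string values at a sink and the attack pattern are each represented as a DFA, and the sink is flagged as vulnerable exactly when the intersection of the two languages is non-empty.

```python
# A minimal sketch of automata-based vulnerability checking. DFAs are partial
# transition dictionaries over a tiny alphabet; a missing entry means reject.
from collections import deque

ALPHABET = ["<", "s", "a"]

def intersect_nonempty(d1, d2):
    """Product construction + BFS: is L(d1) intersected with L(d2) non-empty?"""
    (t1, s1, f1), (t2, s2, f2) = d1, d2
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if p in f1 and q in f2:
            return True
        for c in ALPHABET:
            if (p, c) in t1 and (q, c) in t2:
                nxt = (t1[(p, c)], t2[(q, c)])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Attack pattern: any string containing the substring "<s".
attack = ({(0, "<"): 1, (0, "s"): 0, (0, "a"): 0,
           (1, "<"): 1, (1, "s"): 2, (1, "a"): 0,
           (2, "<"): 2, (2, "s"): 2, (2, "a"): 2}, 0, {2})

# Values reaching the sink before sanitization: any string at all.
unsanitized = ({(0, c): 0 for c in ALPHABET}, 0, {0})

# Values after a sanitizer that removes "<": strings over {"s", "a"} only.
sanitized = ({(0, "s"): 0, (0, "a"): 0}, 0, {0})

if __name__ == "__main__":
    print("unsanitized vulnerable:", intersect_nonempty(unsanitized, attack))  # True
    print("sanitized vulnerable:  ", intersect_nonempty(sanitized, attack))    # False
```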
Data Model Verification: The growing influence of web applications in every aspect of society makes their dependability an immense concern. One positive trend in web application development has been the adoption of the Model-View-Controller (MVC) pattern. Many popular web application development frameworks, such as Ruby on Rails, Zend for PHP, CakePHP, Django for Python, and Spring for J2EE, are based on the MVC pattern. A fundamental building block of the MVC pattern is the data model, which specifies the object classes and the relations among them. The MVC pattern facilitates the separation of the data model (Model) from the user interface logic (View) and the control flow logic (Controller). The modularity and separation-of-concerns principles imposed by the MVC pattern provide opportunities for developing customized verification and analysis techniques. MVC-based frameworks use an object-relational mapping (ORM) to map the data representation of the web application to the back-end database. We developed an approach for automated verification of data models that 1) extracts a formal data model from the given ORM specification, 2) converts verification queries about the data model to queries about the satisfiability of formulas in a decidable theory, and 3) uses a decision procedure to check the satisfiability of the resulting formulas. We implemented this approach and applied it to open-source Rails applications, discovering data model errors in existing applications and demonstrating the feasibility of our approach.
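To illustrate steps 2) and 3), the sketch below uses the z3-solver Python package (an assumed dependency, not the lab's tool chain) to encode a hypothetical Rails data model in which Post belongs_to User without `dependent: :destroy`, and asks the solver whether deleting a user can violate a "no orphan posts" invariant; a satisfiable query corresponds to a concrete counterexample instance of the data model.

```python
# A minimal sketch of data model verification via satisfiability checking.
# The action, relation, and invariant are assumptions made for this example.
from z3 import (DeclareSort, Function, BoolSort, Const, Consts, ForAll,
                Exists, Solver, And, Not, Implies, sat)

User = DeclareSort("User")
Post = DeclareSort("Post")

author = Function("author", Post, User)            # Post belongs_to :user
alive_u = Function("alive_u", User, BoolSort())    # pre-state user table
alive_u2 = Function("alive_u2", User, BoolSort())  # post-state user table
alive_p = Function("alive_p", Post, BoolSort())    # pre-state post table
alive_p2 = Function("alive_p2", Post, BoolSort())  # post-state post table

u, x = Consts("u x", User)
p = Const("p", Post)

s = Solver()
# Invariant holds before the action: every live post has a live author.
s.add(ForAll([p], Implies(alive_p(p), alive_u(author(p)))))
# Action: delete user u; posts are untouched (no dependent: :destroy).
s.add(ForAll([x], alive_u2(x) == And(alive_u(x), x != u)))
s.add(ForAll([p], alive_p2(p) == alive_p(p)))
# Negated property: some live post now has a deleted author (an orphan).
s.add(Exists([p], And(alive_p2(p), Not(alive_u2(author(p))))))

print("orphan post possible:", s.check() == sat)   # expected: True
```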
Analyzing Service-Oriented and Distributed Systems: Service-oriented systems are distributed software systems that consist of web-accessible software components (services) that interact with each other via messages sent over the Internet. A fundamental problem in developing reliable web services is analyzing their interactions, which is challenging due to the distributed nature of web services. We developed a framework for modeling, specifying, and analyzing interactions of web services, and addressed fundamental problems in this area such as realizability of top-down specifications and synchronizability of asynchronously communicating web services. We developed a tool called the Web Service Analysis Tool (WSAT), which verifies properties of web service interactions and checks sufficient conditions for realizability and synchronizability. Decidability of the realizability and synchronizability problems remained open for several years; recently we were able to give algorithms for solving these problems, proving their decidability. Our results in this domain turned out to be influential in other domains (for example, for analyzing channel contracts in the Singularity operating system).
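The sketch below gives a simplified flavor of the synchronizability check (it is not the WSAT implementation): two hypothetical peers are modeled as state machines with send and receive transitions, and the conversations (global send sequences) reachable under synchronous communication are compared with those reachable with 1-bounded FIFO queues, here only up to a fixed conversation length.

```python
# A minimal sketch of comparing synchronous and 1-bounded asynchronous
# conversations of two communicating peers. Peers, messages, and the length
# bound are assumptions made for this example.

# Hypothetical peers: a client that sends "req" then expects "ack",
# and a server that receives "req" then sends "ack".
CLIENT = {("c0", "!req"): "c1", ("c1", "?ack"): "c2"}
SERVER = {("s0", "?req"): "s1", ("s1", "!ack"): "s2"}

def conversations(bound, max_len=4):
    """All send sequences reachable with FIFO queues of size `bound`
    (bound == 0 means synchronous rendezvous communication)."""
    init = ("c0", "s0", (), ())            # peer states + a queue per peer
    seen, results = {(init, ())}, set()
    stack = [(init, ())]
    while stack:
        (c, s, qc, qs), trace = stack.pop()
        results.add(trace)
        if len(trace) >= max_len:
            continue
        succs = []
        for m in ("req", "ack"):
            if (c, "!" + m) in CLIENT:     # client sends m to the server
                c2 = CLIENT[(c, "!" + m)]
                if bound == 0 and (s, "?" + m) in SERVER:
                    succs.append(((c2, SERVER[(s, "?" + m)], qc, qs), trace + (m,)))
                elif bound > 0 and len(qs) < bound:
                    succs.append(((c2, s, qc, qs + (m,)), trace + (m,)))
            if (s, "!" + m) in SERVER:     # server sends m to the client
                s2 = SERVER[(s, "!" + m)]
                if bound == 0 and (c, "?" + m) in CLIENT:
                    succs.append(((CLIENT[(c, "?" + m)], s2, qc, qs), trace + (m,)))
                elif bound > 0 and len(qc) < bound:
                    succs.append(((c, s2, qc + (m,), qs), trace + (m,)))
        if bound > 0:
            # Asynchronous receives consume from the peer's own queue.
            if qc and (c, "?" + qc[0]) in CLIENT:
                succs.append(((CLIENT[(c, "?" + qc[0])], s, qc[1:], qs), trace))
            if qs and (s, "?" + qs[0]) in SERVER:
                succs.append(((c, SERVER[(s, "?" + qs[0])], qc, qs[1:]), trace))
        for st in succs:
            if st not in seen:
                seen.add(st)
                stack.append(st)
    return results

if __name__ == "__main__":
    print("synchronizable (up to bound):",
          conversations(bound=0) == conversations(bound=1))
```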
Design for Verification: A promising approach to attacking the dependability problem is to find ways of constructing software that facilitate automated verification. We developed a design for verification approach by introducing a set of design patterns that support a modular verification strategy. Our approach builds on the following principles: 1) use of stateful, behavioral interfaces that isolate the behavior and enable modular verification, 2) an assume-guarantee style verification strategy that separates verification of the behavior from verification of conformance to the interface specifications, 3) a general model checking technique for interface verification, and 4) domain-specific and specialized verification techniques for behavior verification. So far, we have applied this approach to the verification of concurrent and distributed systems, which are especially challenging to get right. The case studies we conducted show that, using our design for verification approach, it is possible to achieve scalable software verification in these application domains.
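A minimal sketch of principle 1), with hypothetical names: the allowed call orderings of a shared resource are captured as a finite state machine, and interface verification reduces to checking that every client call sequence stays inside that machine, leaving behavior verification of the resource itself as a separate task.

```python
# A minimal sketch of a stateful behavioral interface and a conformance
# check. The resource, its protocol, and the call traces are assumptions
# made for this example, not the published case studies.

# Interface for a hypothetical mutex-protected resource:
# acquire and release must alternate, starting with acquire.
INTERFACE = {
    ("unlocked", "acquire"): "locked",
    ("locked", "release"): "unlocked",
}

def conforms(call_trace, start="unlocked"):
    """Interface verification: does the call sequence follow the protocol?"""
    state = start
    for call in call_trace:
        if (state, call) not in INTERFACE:
            return False, f"illegal '{call}' in state '{state}'"
        state = INTERFACE[(state, call)]
    return True, state

if __name__ == "__main__":
    print(conforms(["acquire", "release", "acquire", "release"]))
    print(conforms(["acquire", "acquire"]))   # violates the interface
```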
Infinite State Verification: One direction of research at VLab addresses fundamental limitations of automated verification that hinder its applicability to software. One such limitation is that software systems have data types with arbitrarily large domains, and well-known theoretical results show that automated verification of such systems is impossible in general. Our research on verification techniques for infinite state systems has led to heuristic solutions to this problem that identify the errors encountered in practice. The main idea is to search an infinite state space using conservative approximations that over- or under-approximate the behaviors of a system and can be used both to guarantee the absence of bugs (via over-approximation) and to identify existing bugs (via under-approximation). We developed a tool called the Action Language Verifier (ALV) that implements these heuristics, and we applied ALV to the verification of various software systems.
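The sketch below illustrates the over-approximation idea with a simple interval abstraction and widening (a stand-in for the richer arithmetic constraint representations ALV manipulates): it computes an interval invariant for a counting loop and uses it to establish a safety property, with widening guaranteeing termination of the fixpoint iteration and a few narrowing steps restoring precision.

```python
# A minimal sketch of over-approximate verification with the interval
# abstract domain for the loop:  x = 0; while x < N: x = x + 1
# The program, the domain, and the iteration strategy are assumptions
# made for this example.
INF = float("inf")
N = 100

def join(a, b):   return (min(a[0], b[0]), max(a[1], b[1]))
def meet(a, b):   return (max(a[0], b[0]), min(a[1], b[1]))
def widen(a, b):  return (a[0] if b[0] >= a[0] else -INF,
                          a[1] if b[1] <= a[1] else INF)
def is_empty(a):  return a[0] > a[1]

def body(inv):
    """One abstract execution of the loop body: assume x < N, then x += 1."""
    guarded = meet(inv, (-INF, N - 1))
    if is_empty(guarded):
        return guarded
    return (guarded[0] + 1, guarded[1] + 1)

def loop_head(inv):
    """Abstract transfer for the loop head: entry value joined with body result."""
    entry = (0, 0)                      # x = 0 before the loop
    after = body(inv)
    return entry if is_empty(after) else join(entry, after)

inv = (0, 0)
while True:                             # widening iteration (always terminates)
    new = widen(inv, loop_head(inv))
    if new == inv:
        break
    inv = new
for _ in range(5):                      # a few narrowing steps to refine
    inv = loop_head(inv)

exit_vals = meet(inv, (N, INF))         # the loop exits when x >= N
print("loop-head invariant:", inv)      # expected (0, 100)
print("x <= N at exit:", exit_vals[1] <= N)
```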