SoSL Projects 2011-2014
The following projects were funded during the Lablet’s first phase of funding (2011-2014).
Round 1 Projects
- Attaining Least Privilege Through Automatic Partitioning of Hybrid Programs, William Enck & Xiaohui (Helen) Gu, PIs. Students: Adwait Nadkarni, Tsung-Hsuan (Anson) Ho, Ashwin Shashidharan
This project investigates the hard problem of resilient architectures from the standpoint of enabling new potential for incorporating privilege separation into computing systems. However, privilege separation alone is insufficient to achieve strong security guarantees; it must be combined with a security policy for the separated components that does not impact the functional requirements of the system. The general hypothesis of this project is that legacy computing systems contain emergent properties that allow automatic software partitioning for privilege separation capable of supporting practical least privilege security policies.
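Below is a minimal, hypothetical sketch of the privilege-separation architecture this line of work targets: an unprivileged worker process handles untrusted input and may only request the narrow operations a policy allows from a privileged broker. The names, policy, and operations are invented for illustration; this is not the project's tool.

```python
# Privilege separation sketch: a privileged "broker" enforces a least-privilege
# policy on requests from an unprivileged "worker" that handles untrusted input.
import multiprocessing as mp

POLICY = {"read_config"}  # least-privilege policy: operations the worker may request

def broker(conn):
    """Privileged side: validates every request against the policy."""
    while True:
        request = conn.recv()
        if request == "quit":
            break
        if request not in POLICY:
            conn.send(("denied", request))   # policy violation: refuse, don't crash
            continue
        # Perform the privileged operation on the worker's behalf.
        conn.send(("ok", "timeout=30"))      # e.g., a config value only the broker can read

def worker(conn):
    """Unprivileged side: requests only what it needs; the second request exceeds its privilege."""
    for request in ("read_config", "write_config"):
        conn.send(request)
        print("worker got:", conn.recv())
    conn.send("quit")

if __name__ == "__main__":
    broker_end, worker_end = mp.Pipe()
    p = mp.Process(target=worker, args=(worker_end,))
    p.start()
    broker(broker_end)
    p.join()
```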
- Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance, Chris Mayhorn & Emerson Murphy-Hill, PIs. Students: Kyung Wha Hong, Chris Kelley
Phishing has become a serious threat in the past several years, and combating it is increasingly important. Why do certain people get phished and others do not? In this project, we aim to identify the factors that cause people to be susceptible and resistant to phishing attacks. In doing so, we aim to deploy adaptive anti-phishing measures.
- Software Security Metrics, Tao Xie, Laurie Williams, & Ehab S. Al-Shaer (UNC-Charlotte), PIs. Students: Jason King, Rahul Pandita, Mohammed Alsaleh
Software security metrics are commonly considered a critical component of the science of security. We propose to investigate existing and new security metrics to predict which code locations are likely to contain vulnerabilities. In particular, we will investigate security metrics that take into account comprehensive factors such as the software’s internal attributes, the developers who develop the software, the attackers who attack the software, and the users who use the software. The project also investigates metrics to evaluate firewall security objectively. The developed metrics, including risk, usability, and cost, will be used to automate the creation of security architectures and configurations.
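As a concrete illustration of metric-based vulnerability prediction (a sketch under assumed data; the files, metric values, and weights below are invented, not the project's models), one can rank code locations by a weighted combination of normalized metrics:

```python
# Rank code locations by vulnerability-prediction metrics (illustrative only).
files = {
    # file: (lines changed recently, cyclomatic complexity, distinct developers)
    "net/parser.c": (420, 38, 7),
    "ui/dialog.c":  (35,  9,  2),
    "auth/login.c": (210, 22, 5),
}

WEIGHTS = (0.5, 0.3, 0.2)  # assumed relative importance of churn, complexity, developers

def risk(metrics):
    # Normalize each metric by its maximum across files, then combine.
    maxima = [max(m[i] for m in files.values()) for i in range(3)]
    return sum(w * v / mx for w, v, mx in zip(WEIGHTS, metrics, maxima))

for name, m in sorted(files.items(), key=lambda kv: -risk(kv[1])):
    print(f"{name:15s} risk={risk(m):.2f}")  # inspect the riskiest files first
```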
- Improving the Usability of Security Requirements by Software Developers through Empirical Studies and Analysis, Travis Breaux (CMU), Laurie Williams, & Jianwei Niu (UTSA), PIs. Student: Maria Riaz
This project aims to discover a general theory explaining what cues security experts use to decide when to apply security requirements, and how to present those cues in the form of security patterns to novice designers in a way that yields improved security designs.
- Empirical Privacy and Empirical Utility of Anonymized Data, Ting Yu, PI. Students: Xi Gong, Entong Shen
The objective of this project is to design empirical privacy metrics that are independent of existing privacy models and naturally reflect the privacy offered by anonymization. We propose to model privacy attacks as an inference process and develop an inference framework over anonymized data (independent of specific privacy objectives and techniques for data anonymization) into which machine-learning techniques can be integrated to implement various attacks. The privacy metric is then defined as the accuracy of inference of individuals’ sensitive attributes. Data utility is modeled as a data aggregation process and thus can be measured in terms of the accuracy of aggregate query answering. Our hypothesis is that, given the above empirical privacy and utility metrics, differential-privacy-based anonymization techniques offer a better privacy/utility tradeoff when appropriate parameters are set. In particular, it is possible to improve utility greatly while imposing limited impact on privacy.
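A minimal sketch of these empirical metrics on invented data follows: privacy is measured as the accuracy of an inference attack on the sensitive attribute, and utility as the error of an aggregate count query. The records, generalization scheme, and noise scale are all assumptions for illustration.

```python
# Empirical privacy = inference-attack accuracy; empirical utility = query error.
import random
random.seed(1)

# (zip code, age, disease) -- disease is the sensitive attribute
records = [("27601", 34, "flu"), ("27601", 36, "flu"),
           ("27605", 52, "cancer"), ("27605", 58, "cancer")]

def generalize(rec):
    """Anonymize by coarsening the quasi-identifiers (zip prefix, age decade)."""
    zipc, age, disease = rec
    return (zipc[:3] + "**", age // 10 * 10, disease)

anon = [generalize(r) for r in records]

def infer(qid):
    """Inference attack: predict the majority sensitive value in the anonymized class."""
    group = [d for z, a, d in anon if (z, a) == qid]
    return max(set(group), key=group.count)

hits = sum(infer(generalize(r)[:2]) == r[2] for r in records)
print("empirical privacy loss (inference accuracy):", hits / len(records))
# 1.0 here: the generalized groups are homogeneous, so this anonymization leaks all

# Empirical utility: relative error of an aggregate query ("how many flu cases?").
true_count = sum(d == "flu" for _, _, d in records)
# Under differential privacy the released count would carry Laplace noise:
noisy = true_count + random.expovariate(1) - random.expovariate(1)  # Laplace(0, 1)
print("relative error of noisy count:", abs(noisy - true_count) / true_count)
```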
- Shared Perceptual Visualizations for System Security, Christopher G. Healey, PI. Student: Terry Rogers
We are studying how to harness human visual perception in information display, with a specific focus on ways to combine layers of data in a common, well-understood display framework. Our visualization techniques are designed to present data in ways that are efficient and effective, allowing an analyst to explore large amounts of data rapidly and accurately.
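For illustration only (our sketch, not the project's visualization system), the following combines two data layers in one display using separable visual channels, luminance for one layer and glyph size for another, so neither layer masks the other:

```python
# Layered security visualization sketch: luminance encodes traffic volume,
# glyph size encodes alert severity, in one common display.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
volume = rng.random((20, 20))                 # layer 1: network traffic volume
x, y = rng.integers(0, 20, 12), rng.integers(0, 20, 12)
severity = rng.random(12)                     # layer 2: alert severity at hosts

fig, ax = plt.subplots()
ax.imshow(volume, cmap="Greys", origin="lower")   # luminance channel
ax.scatter(x, y, s=30 + 200 * severity,           # size channel
           facecolors="none", edgecolors="red")
ax.set_title("Traffic volume (luminance) with alert severity (glyph size)")
plt.show()
```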
- An Adoption Theory of Secure Software Development Tools, Emerson Murphy-Hill, PI. (Note: this was initially a seedling project, but as of Funding Round 3 is now a full project of the same name.)
- Quantifying Underpinnings for Network Analytics as Components of Composable Security, Rudra Dutta, PI. (Note: This was originally a seedling project, but as of Funding Round 2 is now a full project called “Studying Latency and Stability of Closed-Loop Sensing-Based Security Systems.”)
Round 2 Projects
- Argumentation as a Basis for Reasoning about Security, Munindar P. Singh & Simon D. Parsons (CUNY), PIs. Student: Nirav Ajmeri
This project involves the application of argumentation techniques for reasoning about policies, and security decisions in particular. Specifically, we are producing a security-enhanced argumentation framework that (a) provides not only inferences to draw but also actions to take; (b) considers multiparty argumentation; (c) measures the mass of evidence on both attacking and supporting arguments in order to derive a defensible conclusion with confidence; and (d) develops suitable critical questions as the basis for argumentation. The end result would be a tool that helps system administrators and other stakeholders capture and reason about their rationales as a way of ensuring that they make sound decisions regarding policies.
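A minimal sketch of the evidence-weighing step (the arguments and masses below are invented, not the project's framework) might look like this, with the conclusion mapped directly to an action:

```python
# Weigh evidence mass on supporting vs. attacking arguments to reach a
# conclusion with a confidence value, then map the conclusion to an action.
arguments = [
    # (claim it bears on, stance, evidence mass)
    ("host_compromised", "support", 0.6),  # IDS alert
    ("host_compromised", "support", 0.3),  # anomalous outbound traffic
    ("host_compromised", "attack",  0.2),  # scheduled backup explains the traffic
]

def conclude(claim, threshold=0.7):
    support = sum(m for c, s, m in arguments if c == claim and s == "support")
    attack  = sum(m for c, s, m in arguments if c == claim and s == "attack")
    confidence = support / (support + attack)
    action = "isolate host" if confidence >= threshold else "keep monitoring"
    return confidence, action

conf, action = conclude("host_compromised")
print(f"confidence={conf:.2f} -> {action}")
```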
- An Investigation of Scientific Principles Involved in Software Security Engineering, Mladen Vouk, Laurie Williams, & Jeffrey Carver, PIs. Student: Patrick Morrison
The fault-elimination part of software security engineering hinges on proactive detection of potential vulnerabilities during the software development stages. This project is currently working on a) an attack operational profile definition based on known software vulnerability classifications, and b) assessment of software testing strategies, guided by two questions: given that funding and time constraints place a practical limit on the quality of security engineering, how can that limit be assessed and leveraged; and how can test cases be generated automatically that are as efficient as human non-operational testing of software?
- Quantifying Mobile Malware Threats, Xuxian Jiang, PI. Student: Yajin Zhou
In this project, we aim to systematize the knowledge base about existing mobile malware (especially on Android) and quantify their threats so that we can develop principled solutions to provably determine their presence or absence in existing marketplaces. The hypothesis is that there exist certain fundamental commonalities among existing mobile malware. Accordingly, we propose a mobile malware genome project called MalGenome with a large collection of mobile malware samples. Based on the collection, we can then precisely systematize their fundamental commonalities (in terms of violated security properties and behaviors) and quantify their possible threats on mobile devices. After that, we can develop principled solutions to scalably and accurately determine their presence in existing marketplaces. Moreover, to predict or uncover unknown (or zero-day) malware, we can also leverage the systematized knowledge base to generate an empirical prediction model. This model can also be rigorously and thoroughly evaluated for its repeatability and accuracy.
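A minimal sketch of the commonality idea (the family profiles and feature sets below are invented for illustration): profile known malware families by the security-relevant behaviors they share, then flag a new app whose behavior set is close to a family profile.

```python
# Flag apps whose behavior sets are similar to known malware-family profiles.
families = {
    "FakeInstaller": {"SEND_SMS", "READ_PHONE_STATE", "hide_icon"},
    "DroidKungFu":   {"exploit_root", "install_payload", "READ_PHONE_STATE"},
}

def jaccard(a, b):
    """Set similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b)

new_app = {"SEND_SMS", "READ_PHONE_STATE", "hide_icon", "INTERNET"}

for family, profile in families.items():
    score = jaccard(new_app, profile)
    print(f"{family}: similarity={score:.2f}",
          "-> flag for review" if score > 0.5 else "")
```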
- Towards a Scientific Basis for User Centric Security Design, Ting Yu, Ninghui Li (Purdue), & Robert Proctor (Purdue), PIs. Student: Zach Jorgensen
Human interaction is an integral part of any system. Users interact with a system daily and make many decisions that affect its overall security state. The fallibility of users has been demonstrated, but there is little research focused on the fundamental principles for optimizing the usability of security mechanisms. We plan to develop a framework to design, develop, and evaluate user interaction in a security context. We will (a) examine current security mechanisms and develop basic principles that can influence security interface design; (b) introduce new paradigms for security interfaces that utilize those principles; (c) design new human-centric security mechanisms for several problem areas to illustrate the paradigms; and (d) conduct repeatable human-subject experiments to evaluate and refine the principles and paradigms developed in this research.
- Spatiotemporal Security Analytics and Human Cognition, David Roberts, PI. Student: Titus Barik
A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent will often utilize widespread computational attacks in order to exploit systems and gain control. Conventional detection techniques can be grouped into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). On the other hand, HOPs are passive. They examine the ways in which users complete the tasks they would normally be completing and look for patterns that are indicative of humans vs. bots. HIPs and HOPs have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs, on the other hand, tend to be more secure because they require explicit action from a user to complete a dynamically generated test. However, because humans must expend cognitive effort to pass HIPs, these proofs can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs) that blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems from non-human users.
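A toy sketch of a passive check in the HOP/HSP spirit (purely illustrative; the threshold and timings are invented): human input timing tends to be irregular, while a scripted bot's is often near-constant, so suspiciously regular timing can be flagged without interrupting the user.

```python
# Passive behavioral check: flag event streams with suspiciously regular timing.
import statistics

def looks_scripted(intervals_ms, cv_threshold=0.1):
    """Flag a stream whose inter-event timing variability is near zero."""
    cv = statistics.stdev(intervals_ms) / statistics.mean(intervals_ms)
    return cv < cv_threshold  # coefficient of variation below threshold

human = [210, 145, 380, 95, 260, 170]   # noisy human keystroke gaps (ms)
bot   = [100, 101, 99, 100, 100, 101]   # scripted replay

print("human flagged:", looks_scripted(human))  # False
print("bot flagged:  ", looks_scripted(bot))    # True
```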
- Studying Latency and Stability of Closed-Loop Sensing-Based Security Systems, Rudra Dutta & Meeko Oishi (UNM-Albuquerque), PIs. Student: Trisha Biswas
In this project, our focus is on understanding a class of security systems in analytical terms at a certain level of abstraction. Specifically, the systems we intend to look at are (i) multipath routing (for increasing reliability) and (ii) dynamic firewalls. For multipath routing, the threat scenario is jamming: nodes disabled by jamming take the place of compromised components in that they fail to perform their proper function. The multipath and diverse-path mechanisms are intended to allow the system to perform its overall function (critical message delivery) despite this. The project will focus on quantifying and bounding this ability to function redundantly. For the firewall, the compromise consists of an attacker guessing the firewall rules and being able to circumvent them. The system is designed to withstand this by dynamically changing the ruleset applied over time. Our project will focus on quantifying or characterizing this ability.
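For the multipath case, a simplified worked example (our simplification, not the project's model): with k node-disjoint paths of L relay nodes each, and each relay independently disabled by jamming with probability p, the message is delivered as long as at least one path survives.

```python
# Quantify multipath redundancy under independent jamming of relay nodes.
def delivery_probability(k, L, p):
    path_ok = (1 - p) ** L         # one path survives iff all its relays do
    return 1 - (1 - path_ok) ** k  # delivery fails only if every path fails

for k in (1, 2, 3):
    print(f"k={k} paths: P(delivery) = {delivery_probability(k, L=4, p=0.1):.3f}")
# k=1: 0.656, k=2: 0.882, k=3: 0.959 -- redundant paths bound the jamming loss
```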
- A Science of Timing Channels in Modern Cloud Environments, Michael Reiter (UNC), PI. Students: Yinqian Zhang, Peng Li
The eventual goal of our research is to develop a principled design for comprehensively mitigating access-driven timing channels in modern compute clouds, particularly of the “infrastructure as a service” (IaaS) variety. This type of cloud permits the cloud customer to deploy arbitrary guest virtual machines (VMs) to the cloud. The security of the cloud-resident guest VMs depends on the virtual machine monitor (VMM), e.g., Xen, to adequately isolate guest VMs from one another. While modern VMMs are designed to logically isolate guest VMs, there remains the possibility of timing “side channels” that permit one guest VM to learn information about another guest VM simply by observing features that reflect the other’s effects on the hardware platform. Such attacks are sometimes referred to as “access-driven” timing attacks.
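The flavor of such an access-driven attack can be sketched with a toy simulation (not real hardware, and not the project's results) in the prime+probe style: the attacker fills a shared cache set, the victim's secret-dependent access evicts an attacker line, and the attacker reads the secret from which of its probes would now be slow.

```python
# Toy prime+probe simulation of an access-driven timing channel.
CACHE_WAYS = 4

def victim(cache, secret_bit):
    if secret_bit:                              # secret-dependent memory access
        cache.pop(); cache.add("victim_line")   # evicts one attacker line

def attacker_observe(secret_bit):
    cache = {f"attacker_{i}" for i in range(CACHE_WAYS)}  # prime: fill the set
    victim(cache, secret_bit)
    misses = CACHE_WAYS - sum(f"attacker_{i}" in cache    # probe: count evictions
                              for i in range(CACHE_WAYS))
    return 1 if misses > 0 else 0               # timing proxy: misses are slower

for bit in (0, 1):
    print(f"secret bit {bit} inferred as {attacker_observe(bit)}")
```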
- Normative Trust: Toward a Principled Basis for Enabling Trustworthy Decision Making, Munindar Singh, PI. Student: Anup Kalia
This project seeks to develop a deeper understanding of trust than is supported by current methods, which largely disregard the underlying relationships on the basis of which people do or do not trust each other. Accordingly, we begin from the notion of what we term normative relationships – or norms for short – directed from one principal to another. An example of a normative relationship is a commitment: is the first principal committed to doing something for the second principal? (The other main types of normative relationships are authorizations, prohibitions, powers, and sanctions.) Our broad research hypothesis is that trust can be modeled in terms of the relevant norms being satisfied or violated. To demonstrate the viability of this approach, we are mining commitments from emails (drawn from the well-known Enron dataset) and using them to assess trust. Preliminary results indicate that our methods can effectively estimate the trust-judgment profiles of human subjects.
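A minimal sketch of the research hypothesis (the commitment data below are invented): estimate trust in a principal as the smoothed fraction of their commitments toward you that were satisfied.

```python
# Model trust as the fraction of commitments a debtor satisfied, with
# Laplace smoothing so a single observation is not decisive.
commitments = [
    # (debtor, creditor, outcome)
    ("alice", "bob", "satisfied"),  # e.g., "I'll send the report by Friday" - sent
    ("alice", "bob", "satisfied"),
    ("alice", "bob", "violated"),
    ("carol", "bob", "violated"),
]

def trust(debtor, creditor):
    relevant = [o for d, c, o in commitments if (d, c) == (debtor, creditor)]
    satisfied = sum(o == "satisfied" for o in relevant)
    return (satisfied + 1) / (len(relevant) + 2)  # smoothed estimate in (0, 1)

print("bob's trust in alice:", round(trust("alice", "bob"), 2))  # 0.6
print("bob's trust in carol:", round(trust("carol", "bob"), 2))  # 0.33
```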
Round 3 Projects
- Low-level Analytics Models of Cognition for Novel Security Proofs, David Roberts & Robert St. Amant, PIs. Students: Titus Barik, Arpan Chakraborty, Brent Harrison
This project continues the “Human Subtlety Proofs” (HSP) work described above under “Spatiotemporal Security Analytics and Human Cognition”: blending the stronger security characteristics of human interactive proofs (HIPs) with the unobtrusiveness of human observational proofs (HOPs), so that systems can be actively secured against non-human users without disrupting legitimate ones.
- An Adoption Theory of Secure Software Development Tools, Emerson Murphy-Hill, PI. Student: Jim Witschey
Programmers interact with a variety of tools that help them do their jobs, from “undo” to FindBugs’ security warnings to entire development environments. However, programmers typically know about only a small subset of tools that are available, even when many of those tools might be valuable to them. In this project, we investigate how and why software developers find out about – and don’t find out about – software security tools. The goal of the project is to help developers use more relevant security tools, more often.
- Modeling the Risk of User Behavior on Mobile Devices, Benjamin Watson, Will Enck, Anne McLaughlin, & Michael Rappa, PIs.
It is already true that the majority of users’ computing experience is a mobile one. Unfortunately, that mobile experience is also riskier: users are often multitasking, hurrying, or uncomfortable, leading them to make poor decisions. Our goal is to use mobile sensors to predict when users are distracted in these ways and likely to behave insecurely. We will study this possibility in a series of lab and field experiments.
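One way to make this concrete (a sketch with invented features, weights, and threshold, not the project's model) is to fold sensor readings into a distraction score and defer risky security decisions when the score is high:

```python
# Combine mobile sensor readings into a distraction score; defer risky
# security prompts while the user appears distracted.
def distraction_score(accel_var, typing_error_rate, is_walking):
    # Weighted combination; the weights are illustrative assumptions.
    return (0.4 * min(accel_var / 2.0, 1.0)
            + 0.4 * typing_error_rate
            + 0.2 * is_walking)

def should_defer_security_prompt(score, threshold=0.5):
    return score > threshold  # e.g., delay "install this certificate?" dialogs

score = distraction_score(accel_var=1.8, typing_error_rate=0.3, is_walking=True)
print(f"distraction={score:.2f}, defer prompt: {should_defer_security_prompt(score)}")
```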
- Understanding the Fundamental Limits in Passive Inference of Wireless Channel Characteristics, Huaiyu Dai & Peng Ning, PIs. Student: Xiaofan He
It is widely accepted that wireless channels decorrelate fast over space, and half a wavelength is the key distance metric used in existing wireless physical-layer security mechanisms for security assurance. We believe that this channel correlation model is incorrect in general: it leads to wrong hypotheses about the inference capability of a passive adversary and results in a false sense of security, which will expose legitimate systems to severe threats with little awareness. In this project, we focus on establishing a correct model of channel correlation in wireless environments of interest, and on properly evaluating the safety distance metric of existing and emerging wireless security mechanisms, as well as cyber-physical systems employing these security mechanisms. Upon successful completion of the project, the expected outcome will allow us to accurately determine key system parameters (e.g., the security zone for secret key establishment from wireless channels) and confidently assess the security assurance of wireless security mechanisms. More importantly, the results will correct the previous misconception of channel decorrelation and help security researchers develop new wireless security mechanisms on a proven scientific foundation.
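The conventional assumption being questioned here can be made concrete: under the classical Jakes (uniform scattering) model, the spatial correlation of the channel at separation d is ρ(d) = J0(2πd/λ), which is why half a wavelength is treated as a safe decorrelation distance. The sketch below evaluates this model (self-contained via the integral form of J0); it illustrates the textbook assumption, not the project's corrected model.

```python
# Spatial channel correlation under the Jakes model: rho(d) = J0(2*pi*d/lambda).
import math

def j0(x, steps=1000):
    """Bessel J0 via its integral form: (1/pi) * integral of cos(x sin t), t in [0, pi]."""
    h = math.pi / steps
    return sum(math.cos(x * math.sin(i * h)) for i in range(steps)) * h / math.pi

for d_over_lambda in (0.1, 0.25, 0.5, 1.0):
    rho = j0(2 * math.pi * d_over_lambda)
    print(f"d = {d_over_lambda:>4}*lambda: correlation = {rho:+.3f}")
# At d = 0.5*lambda the correlation has fallen to about -0.30 under this model,
# but only if scattering is uniform; the project argues real environments can
# violate that assumption, leaving correlation exploitable by a passive adversary.
```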
- An Investigation of Scientific Principles Involved in Attack-Tolerant Software, Mladen Vouk, PI. Student: Da Young Lee
High-assurance systems, for which security is especially critical, should be designed to a) auto-detect attacks (even when correlated); b) isolate or interfere with the activities of a potential or actual attack; and c) recover a secure state and continue, or fail safely. Fault-tolerant (FT) systems use forward or backward recovery to continue normal operation despite the presence of hardware or software failures. Similarly, an attack-tolerant (AT) system would recognize security anomalies, possibly identify user “intent,” and effect an appropriate defense and/or isolation. Some of the underlying questions in this context are: How is a security anomaly different from a “normal” anomaly, and how does one reliably recognize it? How does one recognize user intent? How does one deal with security failure-correlation issues? What is the appropriate safe response to potential security anomaly detection? The key hypothesis is that all security attacks produce an anomalous state signature that is detectable at run-time, given enough appropriate system, environment, and application provenance information. If that is true (and we plan to test it), then fault-tolerance technology (existing or newly developed) may be used successfully to prevent or mitigate a security attack. A range of AT technologies will be reviewed, developed, and assessed.
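A minimal sketch of that run-time detection hypothesis (the telemetry, features, and threshold below are invented): learn a baseline over system-state features, then flag states whose deviation exceeds a threshold as candidate attack signatures.

```python
# Run-time anomaly detection: flag system states far outside a learned baseline.
import statistics

baseline = [(120, 0.02), (131, 0.01), (117, 0.03), (125, 0.02)]  # (syscalls/s, error rate)

means  = [statistics.mean(col)  for col in zip(*baseline)]
stdevs = [statistics.stdev(col) for col in zip(*baseline)]

def anomalous(state, z_max=3.0):
    """True if any feature's z-score against the baseline exceeds the threshold."""
    zs = [abs(v - m) / s for v, m, s in zip(state, means, stdevs)]
    return max(zs) > z_max

print("normal load:", anomalous((128, 0.02)))   # False
print("under attack:", anomalous((410, 0.18)))  # True -- anomalous state signature
```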