PCC 2008 home page · PDF schedule · No abstracts
The workshop is in Newell Simon Hall, location 2 on this CMU Campus Map. All talks are in Newell Simon 1507. Breaks are in the Newell Simon Atrium and will be shared with the FCS-ARSPA-WITS and SecReT workshops.
Links: Local information; CMU campus walking map
0830–0930 | Registration |
0930–1030 | Static Analysis for Extended Byte Code Verification. Thomas Jensen, IRISA/CNRS. Invited Talk |
1030–1100 | Break |
1100–1130 | Noninterference for Mobile Code with Dynamic Security Domains
Language-based information flow analysis is used to statically examine a program for information flows between objects of different security domains, and to verify that these flows follow a given policy. When the program is distributed as mobile code and executed in different environments, it may access external objects whose domains are not known until runtime. To maintain information flow security, runtime tests are required to determine which flows between accessed objects are actually allowed before operations inducing these flows are performed. We present an imperative language with heaps, domains as values, and classes with restricted dependent types. We use a type system to statically prove that the flow tests included in a program are sufficient, so that a noninterference property for the data objects is ensured regardless of their domains. |
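For a concrete picture of the runtime flow test described in this abstract, the following Haskell sketch shows only the general idea; it is not the talk's typed language, and the Domain type, the policy representation, and the copyField operation are assumptions made here for illustration.

import qualified Data.Map as Map
import Data.IORef

-- Sketch: objects carry their security domain as a runtime value, and a
-- flow test guards every operation that copies data across objects.
newtype Domain = Domain String deriving (Eq, Show)

-- Toy policy: an explicit list of permitted flows (flows within a single
-- domain are always allowed).
type Policy = [(Domain, Domain)]

flowAllowed :: Policy -> Domain -> Domain -> Bool
flowAllowed pol from to = from == to || (from, to) `elem` pol

-- A mutable object tagged with the domain it belongs to.
data Obj = Obj { objDomain :: Domain, objFields :: IORef (Map.Map String Int) }

-- The runtime test required before a flow-inducing operation: copy a field
-- only if the source object's domain may flow to the target object's domain.
copyField :: Policy -> Obj -> Obj -> String -> IO ()
copyField pol src dst field
  | flowAllowed pol (objDomain src) (objDomain dst) = do
      fs <- readIORef (objFields src)
      case Map.lookup field fs of
        Just v  -> modifyIORef (objFields dst) (Map.insert field v)
        Nothing -> pure ()
  | otherwise = putStrLn "flow test failed: copy suppressed"

In the talk's setting, the type system discharges the obligation that such a test precedes every flow-inducing operation; this sketch only shows the dynamic side.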
1130–1200 | Comparing Semantic and Syntactic Methods in Mechanized Proof Frameworks
We present a comparison of semantic and syntactic proof methods for reasoning about typed assembly languages in Coq. We make available our complete Coq developments for a simple and easily understood benchmark system, presenting both styles of soundness proof to the same interface. The syntactic proof is standard subject reduction; the semantic proof uses Gödel-Löb modal logic, shallowly embedded in Coq. The proof style of the modal logic is flexible and facilitates experimental modifications to the underlying machine. As an example of this flexibility, we discuss how to add fault tolerance to the list machine. In addition, we discuss how the choice of proof methodology affects the trusted computing base of a typed assembly language system. |
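As background for the comparison, the two proof styles usually have roughly the following shape; this is a generic sketch in LaTeX, not the benchmark's actual Coq definitions.

% Syntactic soundness: subject reduction (preservation) plus progress.
\begin{align*}
\text{Preservation:} &\quad \vdash e : \tau \;\wedge\; e \longrightarrow e' \;\Longrightarrow\; \vdash e' : \tau \\
\text{Progress:}     &\quad \vdash e : \tau \;\Longrightarrow\; e \text{ is a value} \;\vee\; \exists e'.\; e \longrightarrow e'
\end{align*}
% Semantic soundness: types denote (step-indexed) predicates on machine
% states, each typing rule is proved as a lemma about those predicates, and
% safety follows from the definition of the interpretation.
\begin{align*}
\models e : \tau \;\triangleq\; \forall k.\; e \in \llbracket \tau \rrbracket_k
\qquad\text{and}\qquad
\vdash e : \tau \;\Longrightarrow\; \models e : \tau
\end{align*}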
1200–1230 | Tight Enforcement of Flexible Information-Release Policies for Dynamic Languages
Information-flow tracking in web applications provides a viable, and increasingly popular, alternative for enforcing end-to-end confidentiality and integrity. However, there is an unsettling gap in the state of the art between formal, mostly static, approaches (which lack support for dynamic code evaluation) and practical, mostly dynamic, approaches (which lack soundness and support for flexible information-release policies). Seeking to bridge the gap, this talk proposes (i) an intuitive yet surprisingly general framework for rich information-release policies expressing both what can be released by an application and where in the code this release may take place, and (ii) tight and modular enforcement by hybrid mechanisms that combine monitoring with on-the-fly static analysis for a language with dynamic code evaluation and communication primitives. The policy framework and enforcement mechanisms support both termination-sensitive and termination-insensitive security policies. Joint work with Aslan Askarov. |
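The "what and where" side of such release policies can be pictured with a small sketch. The Haskell fragment below is purely illustrative (it ignores the hybrid monitor and dynamic code evaluation that the talk addresses), and the Labeled type, the Policy representation, and the declassify function are assumptions made here.

import qualified Data.Map as Map

-- Security labels and labeled values (illustrative only).
data Label = Low | High deriving (Eq, Show)
data Labeled a = Labeled { labelOf :: Label, valueOf :: a }

-- A release policy: the "where" is a named release point, the "what" is the
-- escape-hatch function that computes the only value allowed to escape there.
type ReleasePoint = String
type Policy a b   = Map.Map ReleasePoint (a -> b)

-- Declassification succeeds only at a registered point, and only the value
-- computed by that point's escape hatch is released at the low level.
declassify :: Policy a b -> ReleasePoint -> Labeled a -> Maybe (Labeled b)
declassify pol point (Labeled _ secret) =
  case Map.lookup point pol of
    Just hatch -> Just (Labeled Low (hatch secret))
    Nothing    -> Nothing

-- Example policy: only the last four digits of a card number may be
-- released, and only at the "receipt" release point.
examplePolicy :: Policy String String
examplePolicy = Map.fromList [("receipt", reverse . take 4 . reverse)]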
1230–1400 | Lunch |
1400–1500 | Modular Development of Certified System Software Certified software consists of a machine-executable program plus a rigorous proof (checkable by computer) that the software is free of bugs with respect to a particular specification. Both the proof and the specification are written in a general-purpose mathematical logic, the same logic that all of us use in our daily reasoning. The logic is also a programming language: proofs and specifications are developed using software known as a proof assistant and can be mechanically checked for correctness by a small program known as a proof checker. As long as the logic we use is consistent, and the specification describes what the user wants, we can be sure that the underlying software is free of bugs with respect to the specification. The conventional wisdom is that certified software will never be practical because any real software must also rely on the underlying operating system, which is too low-level and complex to be verifiable. In recent years, however, there have been many advances in the theory and engineering of mechanized proof systems applied to verification of low-level code, including proof-carrying code, certified assembly programming, logic-based type systems, and certified or certifying compilation. In this talk, I will give an overview of this exciting new area, focusing on key insights and high-level ideas that make the work on certified software differ from traditional program verification. I will also describe several recent projects by my group at Yale on building certified thread implementations, stack-based control libraries, garbage collectors, and an OS boot loader. |
1500–1530 | A Practical PCC Framework for Java Bytecode |
1530–1600 | Break |
1600–1630 | Reasoning About Faulty Programs When implementing cryptographic algorithms, security protocols, long-running simulations, and safety-critical systems, software developers must consider the possibility that intermittent hardware faults will cause irreparable harm to their application. For example, Boneh, DeMillo, and Lipton showed that a single fault could be exploited to break common implementations of RSA. To support verification of such applications, we develop a new logic for reasoning about faulty programs. Our program logic operates over a standard language of imperative while programs augmented with a single new instruction to indicate the possibility of faults at particular program points. The logic itself is a relative of Separation Logic in which the separating conjunction is used to reason about the limits to which faults may affect different parts of program state. We prove that the logic is sound and illustrate how to use it on several simple examples. We also show how the introduction of a modal connective helps to simplify program assertions and facilitate the reasoning process. |
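The fault instruction mentioned in this abstract can be pictured operationally as a nondeterministic corruption point. The Haskell sketch below is only an illustration (the talk's logic reasons deductively, and restricting faults to a listed set of values and assignments to constants are simplifications made here).

import qualified Data.Map as Map

type Var   = String
type State = Map.Map Var Int

-- A while-style command language with one extra instruction marking the
-- points where a transient fault may corrupt a variable.
data Cmd
  = Assign Var Int        -- x := n (constants only, for brevity)
  | Seq Cmd Cmd           -- c1; c2
  | MaybeFault Var [Int]  -- a fault may overwrite x with any of these values

-- Enumerate every final state reachable under some choice of faults.
run :: Cmd -> State -> [State]
run (Assign x n)      s = [Map.insert x n s]
run (Seq c1 c2)       s = concatMap (run c2) (run c1 s)
run (MaybeFault x vs) s = s : [Map.insert x v s | v <- vs]

-- A postcondition is fault-tolerant if it holds in every fault scenario.
holdsUnderFaults :: Cmd -> State -> (State -> Bool) -> Bool
holdsUnderFaults c s post = all post (run c s)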
1630–1700 | AURA: A Programming Language for Authorization and Audit
Authorization logics provide a principled and flexible approach to specifying access control policies. Furthermore, proofs of propositions in the authorization logic can act as capabilities that provide the reference monitor with evidence that a given request should be granted. The proof-carrying authorization framework, proposed by Appel and Felten, requires the clients requesting access to provide a proof validating the authorization decision. Taking advantage of the Curry-Howard isomorphism, Abadi recently interpreted DCC (Dependency Core Calculus) as a language for authorization. In this talk, we present AURA, a programming language for access control that treats ordinary programming constructs (e.g., integers and recursive functions) and authorization logic constructs (e.g., principals and access control policies) in a uniform way. AURA is based on polymorphic DCC and uses dependent types to permit assertions that refer directly to AURA values while keeping computation out of the assertion level to ensure tractability. AURA's program model suggests an approach to protecting resources that is similar to the proof-carrying authorization framework: untrusted, but well-typed, applications that access resources through an appropriate interface must obey the access control policy and create proofs useful for audit. One of the compelling benefits of explicitly requiring a proof is that a proof is evidence that an access-control decision has been made in accordance with policy. Such proofs, when used for auditing, provide valuable information for detecting flaws in complex authorization policies. |
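The "proof as capability and audit record" idea can be pictured in a few lines. The sketch below is in Haskell rather than AURA's dependently typed setting, and the Principal, Resource, and Proof constructors as well as the readResource function are assumptions made here for illustration.

data Principal = Alice | Bob | Admin deriving (Eq, Show)
data Resource  = Payroll | Logs      deriving (Eq, Show)

-- Proof terms of a tiny authorization logic: either a direct grant or a
-- delegation step that builds on another proof.
data Proof
  = Grant Principal Resource
  | Delegate Principal Principal Proof  -- the first principal inherits the second's right
  deriving Show

-- Which (principal, resource) access does a proof actually establish?
concludes :: Proof -> Maybe (Principal, Resource)
concludes (Grant p r) = Just (p, r)
concludes (Delegate p q sub) =
  case concludes sub of
    Just (q', r) | q == q' -> Just (p, r)
    _                      -> Nothing

-- The reference monitor: access is granted only with a proof for exactly
-- this principal and resource, and the proof itself is logged for audit.
readResource :: Principal -> Resource -> Proof -> IO (Maybe String)
readResource p r prf
  | concludes prf == Just (p, r) = do
      putStrLn ("audit: " ++ show p ++ " read " ++ show r ++ " via " ++ show prf)
      pure (Just ("contents of " ++ show r))
  | otherwise = pure Nothing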
Page last modified: Monday 16 June 2008