DARPA ISO Sponsored Research

2000 Project Summary
Active Trust Management for Autonomous Adaptive Survivable Systems
Massachusetts Institute of Technology

Project Website:  http://www.ai.mit.edu/projects/its/index.html
Quad Chart: provided by the performing organization
Objective: Our project aims to build Adaptive Survivable Systems that are capable of performing their intended function even when underlying computational resources have been successfully compromised.  In particular, we wish to build systems that model the trustworthiness of computational resources and that make rational choices about how best to achieve their goals in light of the risks and benefits involved in using alternative computational resources.
Approach: Our project will focus on four major topics:
1.  Trust Models: An Adaptive Survivable System must know which resources are trustable and for what purposes they may be trusted.  This in turn depends on which components have been compromised and on the form of the compromise, which in turn depends on what attacks have been conducted, which of them have succeeded, and with what intent.  Our trust model will therefore have three levels, each with its own ontology and inference techniques.  The trustability level will center on properties of significance to applications (e.g. privacy, quality of service).  The compromise level will focus on the computational components that provide these properties and on the ways in which they may be compromised.  The attack level will focus on the types of attacks and on how they enable compromise of critical resources.
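To make the three-level model concrete, the following is a minimal sketch in Python of one plausible shape for it; all class names, fields, and figures are illustrative assumptions, not the project's actual ontology:

    from dataclasses import dataclass, field

    @dataclass
    class Attack:                    # attack level: what was attempted
        kind: str                    # e.g. "buffer-overflow"
        target: str                  # resource the attack was aimed at
        succeeded: bool
        suspected_intent: str        # e.g. "gain root", "exfiltrate data"

    @dataclass
    class Compromise:                # compromise level: effect on a component
        component: str               # e.g. "file-server-1"
        form: str                    # e.g. "root-compromise", "degraded-io"
        evidence: list = field(default_factory=list)  # supporting Attack records

    @dataclass
    class Trustability:              # trustability level: what applications care about
        resource: str
        # estimated probability that the resource still delivers each property
        properties: dict = field(default_factory=dict)

    # Example: a successful overflow on file-server-1 lowers confidence in its
    # privacy guarantee while leaving its quality of service largely intact.
    a = Attack("buffer-overflow", "file-server-1", True, "gain root")
    c = Compromise("file-server-1", "root-compromise", [a])
    t = Trustability("file-server-1", {"privacy": 0.2, "quality-of-service": 0.9})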

2. Perpetual Analytic Monitoring: The trust model is constructed and kept current by constant monitoring of information streams arising from multiple sources, such as intrusion detection systems and the self-monitoring of application systems.  We collate and analyze these reports, looking for temporal trends that are indicative of coordinated attacks or of particular compromises.  Thus, our goal is not so much to spot attacks as to assess the degree of compromise already present.  This part of our effort will be based on our MAITA monitoring system.
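As an illustration of trend detection, the sketch below matches a simple multi-stage trend template against a time-sorted stream of monitor reports; the template format and event names are assumptions made for illustration, not MAITA's actual representations:

    from datetime import datetime, timedelta

    # A template is an ordered list of (event_type, max_gap) stages: each stage
    # must be observed within max_gap of the previous one (None = no constraint,
    # used only for the opening stage).
    SLOW_SCAN_THEN_LOGIN = [
        ("port-scan", None),
        ("port-scan", timedelta(hours=6)),
        ("failed-login", timedelta(hours=12)),
        ("successful-login", timedelta(hours=1)),
    ]

    def matches(template, events):
        """events: a time-sorted list of (timestamp, event_type) pairs."""
        stage, last_time = 0, None
        for ts, kind in events:
            etype, max_gap = template[stage]
            if kind == etype and (max_gap is None or ts - last_time <= max_gap):
                last_time, stage = ts, stage + 1
                if stage == len(template):
                    return True   # every stage seen in order, within its window
        return False

    events = [
        (datetime(2000, 7, 1, 1, 0), "port-scan"),
        (datetime(2000, 7, 1, 4, 0), "port-scan"),
        (datetime(2000, 7, 1, 9, 0), "failed-login"),
        (datetime(2000, 7, 1, 9, 30), "successful-login"),
    ]
    assert matches(SLOW_SCAN_THEN_LOGIN, events)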

3. Self-Adaptive Survivable Systems: Trust models influence the way a self-adaptive system attempts to perform its computation.  Self-adaptive systems are structured so that each sub-task has many methods available for achieving its goal.  Each of these methods requires specific types of resources, and each of these resources is assessed for its trustworthiness; each method also promises a certain quality of answer.  A self-adaptive system makes the rational choice of using the method that is most likely to achieve maximum net benefit.  Self-adaptive systems also inform the trust model.
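The sketch below illustrates this decision rule; the methods, resources, and trust figures are invented for illustration, and it assumes a method succeeds only if every resource it uses is trustworthy, with independent compromises:

    trust = {"server-a": 0.95, "server-b": 0.40, "local-cache": 0.99}

    methods = [
        # (name, promised quality, cost, resources required)
        ("full-computation", 10.0, 2.0, ["server-a", "server-b"]),
        ("approximate",       7.0, 1.0, ["server-a"]),
        ("cached-answer",     5.0, 0.1, ["local-cache"]),
    ]

    def expected_net_benefit(quality, cost, resources):
        p_success = 1.0
        for r in resources:          # discount by each resource's trustworthiness
            p_success *= trust[r]
        return p_success * quality - cost

    best = max(methods, key=lambda m: expected_net_benefit(*m[1:]))
    print(best[0])   # "approximate": server-b's low trust rules out the best method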
The goals and invariants of each computation are explicitly represented and checked as the computation proceeds.  The failure of a computation to behave as expected provides evidence that the resources used by that computation have been compromised.  This evidence is reported to the monitoring system and is used to help assess the degree of compromise and the trustworthiness of the resources.
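As an illustration, the sketch below wraps a computation with an explicit invariant check and reports a violation as evidence of compromise; report_to_monitor is a hypothetical stand-in for the actual monitoring interface:

    def report_to_monitor(resources, detail):
        print(f"possible compromise of {resources}: {detail}")

    def checked(invariant, resources):
        """Decorator: check `invariant` on each result; report any failure."""
        def wrap(fn):
            def run(*args, **kwargs):
                result = fn(*args, **kwargs)
                if not invariant(result):
                    report_to_monitor(resources,
                                      f"{fn.__name__} violated its invariant")
                return result
            return run
        return wrap

    @checked(invariant=lambda xs: xs == sorted(xs), resources=["server-a"])
    def merge_results(a, b):
        return a + b      # a faulty (or compromised) merge: order not preserved

    merge_results([1, 3], [2, 4])   # triggers a compromise report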

4. Rational, Trust-Driven Resource Allocation: Trend detection, self-monitoring, and trust assessment all consume resources that might otherwise be used by applications to perform their critical services.  Dedicating too many resources to house-keeping functions would prevent the applications from performing their functions (i.e. a self-inflicted denial of service); dedicating too few resources to the house-keeping functions necessary for an accurate trust model can lead to the use of compromised resources in tasks for which they are not trustworthy.  Similarly, application systems themselves constantly make decisions about how to achieve their goals and which resources to use.  Each of these decisions can be viewed as a rational decision-making problem: assessing how best to achieve maximum expected net benefit, given the trustability of the resources, the political situation, and the likelihood of coordinated, malicious intention.
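The one-dimensional sketch below illustrates the house-keeping trade-off: as the fraction of resources devoted to monitoring grows, the trust model becomes more accurate, but the application is starved; both curves are illustrative assumptions, not measured models:

    def expected_benefit(monitor_fraction):
        app_fraction = 1.0 - monitor_fraction
        app_value = 10.0 * app_fraction           # value grows with app resources
        # ...but is realized only if the trust model is accurate enough to keep
        # compromised resources out of critical tasks.
        trust_accuracy = min(1.0, 0.5 + 2.0 * monitor_fraction)
        return app_value * trust_accuracy

    # Search the one-dimensional decision space in 1% steps.
    best = max((f / 100.0 for f in range(101)), key=expected_benefit)
    print(best)   # 0.25: beyond this, monitoring only starves the application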

Recent FY-00 Accomplishments: This project began on July 1, 2000. 
FY-01 Plans: 1.  We plan to construct a preliminary ontology underlying the trust model and to distribute it for discussion with other projects in the program.

2. We plan to enhance our MAITA monitoring system to understand the information provided by a variety of intrusion detection systems and by self-monitoring applications.  We also plan to construct a library of "trend-templates" that describe the temporal patterns of behavior that characterize successful attacks and compromises.

3. We plan to develop techniques for instrumenting an application system so that it checks its own progress towards achieving its goals and generates reports in the event of failure.

4. We plan to develop initial models for rational resource management that take into account the information in the trust model.

Technology Transition: We plan to construct a testbed that illustrates our techniques in the context of the AI Lab's Intelligent Room, a component of the joint LCS/AI Lab Project Oxygen, which is sponsored by DARPA and a consortium of commercial partners.  The testbed system will be a distributed agent system running on an ensemble of several computers.  We will freely share our experiences with other projects in this program, we will publish reports, and we will demonstrate our techniques to DARPA and our sponsoring partners.
Principal Investigator: Howard Shrobe
MIT AI Laboratory
NE43-839
Massachusetts Institute of Technology
Cambridge, MA 02139
Phone: 617-253-7877
Fax: 617-253-5060
email: hes@ai.mit.edu

Admin Contact Name: Robert Van De Pitt

E-mail: vandepit@mit.edu
Organization: MIT Office of Sponsored Programs
Address: Room E19-750
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139
Phone: (617) 253-3884
Fax: (617) 253-4734