
26 September 2006

http://www.defenselink.mil/Contracts/Contract.aspx?ContractID=3345

CONTRACTS from the United States Department of Defense

September 25, 2006

BOOZ Allen Hamilton Inc., McLean, Va., is being awarded a $6,613,354 cost-plus-fixed-fee contract. This contract is associated with TANGRAM, a fully automated, continuously operating intelligence analysis support system. The purpose of this effort is to provide information understanding for better and faster access to quality intelligence information through systematic performance evaluations to ascertain the value of the TANGRAM system under development. This demonstration will occur four times during the course of this effort, and monthly status and final technical reports will be delivered. The contractor performing this effort will cooperate with other contractors in this program in order to share information and improve the overall output of the program. At this time, $377,172 has been obligated. Solicitations began in September 2005 and negotiations were complete in September 2006. This work will be complete in September 2010. Air Force Research Laboratory, Rome, N.Y., is the contracting activity. (FA8750-06-C-0208)

http://www1.fbo.gov/EPSData/USAF/Synopses/1142/Reference%2DNumber%2DBAA%2D06%2D04%2DIFKA/pip%2Edoc

TANGRAM

Proposer's Information Packet (PIP)

INTRODUCTION

The Intelligence Community and the Air Force routinely sponsor classified and unclassified research to advance the state of the art in intelligence and analytic support systems. The Advanced Research and Development Activity (ARDA) has sponsored several of these research programs. The Advanced Question Answering for Intelligence (AQUAINT) Program has focused on human-machine question-answering solutions and seeks to improve the intelligence value of information retrieval systems through human-computer dialogue. The Novel Information for Massive Data (NIMD) program has sought to capitalize on the cognitive human-computer interface by learning how analysts think and operate in a computer-enhanced analytic environment. The Air Force Topsail program has focused on structured analysis and collaboration-enabling technology to support the process and reporting of analysis.

The Evidence Assessment, Grouping, Linking and Evaluation (EAGLE), Knowledge Discovery and Dissemination (KD-D), and other Government programs have all focused on developing systems, tools and algorithms to detect international terrorist activities and planned events. In total, these programs resulted in the development and evaluation of several methods of detecting and searching for patterns of terrorist behaviors, representing these behaviors in human- and machine-interpretable form, and efficiently searching large data stores for evidence of known behaviors.

While pattern-based representation and search techniques have proven to be a productive method for detection, they are only a reasonable approach when we have prior evidence to support the formulation of the pattern. EAGLE, in particular, developed novel algorithms and methods for linking entities and activities using a guilt-by-association model. By relying on highly accurate and analyst-vetted knowledge about known terrorists, groups, affiliations and activities, these tools and methods have proven to be very effective at tracking terrorist suspects and detecting their threat event intentions.

EMPHASIS

The EAGLE program concludes in calendar year 2005, and despite its successes, several fundamental challenges remain before the technology can be deployed broadly within the Intelligence Community. Significant effort has been invested in improving the performance and scalability of the most promising algorithms, often resulting in two orders of magnitude increase in scalability and four orders of magnitude speed-up in compute time. Currently, the total production time for a single answer is measured in days and weeks. Yet, to have any demonstrable improvement in the intelligence process we need to provide answers in hours or minutes. The four key challenges that define the essence of the Tangram program are:



Reduce system and data configuration time of all automated entity and threat discovery processes by two orders of magnitude (100 x).

Reduce threat entity and event discovery time by two orders of magnitude (100 x).

Increase overall efficiency by three orders of magnitude and overall productivity by two orders of magnitude over current processes while delivering a consistently high intelligence value as determined by experienced analysts.

Improve the detection of low observable threats and events where guilt by association assumptions may not apply.

The fundamental technologies developed under the previous programs have been evaluated through peer review and technical evaluations. There is no debate that these technologies are capable of filling an important void in the Intelligence Community's analytic arsenal. However, to achieve their potential, we must develop the processes, procedures and standards to deploy a fully automated, analytic system capable of processing tens of thousands of simultaneous analytic inquiries in an efficient and scalable manner.

To achieve this, the data, algorithms, and computing resources must be flexible and adaptable. Essentially, the system must be self-composing because of the variety and unpredictability of the questions to be answered.

Tangram is unique in that it takes a systematic view of the process, transforming what is now a set of disjointed, cumbersome-to-configure technologies that are difficult for non-technical users to apply into a self-configuring, continuously operating intelligence analysis support system.

GOALS

Tangram is envisioned as a fully automated, continuously operating, intelligence analysis support system that is capable of configuring itself to achieve a reasonable tradeoff between estimated intelligence value and cost, where



Analysis Support means the production of hypotheses representing an adversary's intentions, methods, logistics support or intended targets based on: 1) behavioral patterns, relationships or context that are expressed only in data, and 2) analyst feedback about prior hypotheses.

Configuring itself means the system is aware of the data characteristics, algorithm capabilities and requirements, and hardware resources such that it can reason about how best to produce an answer.

Reasonable means good enough; optimal is not required.

Intelligence Value is a subjective score or estimate of the intelligence utility of a hypothesis produced by the system.

Cost is the effort required to produce an answer and the opportunity cost of not producing a higher intelligence value product using the same system resources.
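As an illustration only, the value/cost tradeoff above can be sketched as a toy workflow selector. The workflow names, scores, and costs below are hypothetical, not drawn from the PIP; the point is simply that "reasonable" means picking a good-enough plan, not an optimal one.

```python
# Illustrative sketch only: a toy "reasonable tradeoff" selector.
# Workflow names, values, and costs are invented for this example.
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    est_value: float   # estimated intelligence value (subjective score)
    est_cost: float    # effort plus opportunity cost of system resources

def pick_reasonable(workflows, min_value=0.5):
    """Pick a good-enough workflow: best value-per-cost among candidates
    that clear a minimum value threshold. Optimality is not required."""
    viable = [w for w in workflows if w.est_value >= min_value]
    if not viable:
        return None
    return max(viable, key=lambda w: w.est_value / w.est_cost)

plans = [
    Workflow("exhaustive graph search", est_value=0.9, est_cost=100.0),
    Workflow("seeded link expansion",   est_value=0.7, est_cost=5.0),
    Workflow("keyword triage only",     est_value=0.3, est_cost=1.0),
]
best = pick_reasonable(plans)
```

Here the exhaustive search has the highest value but a disqualifying cost per unit of value, so the selector settles for the cheaper, good-enough plan.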

ANTICIPATED PROGRAM PROFILE

The Government provides the table below as general guidance about the funding profile, numbers of awards, contract duration and security requirements.

(Columns: expected number of awards; anticipated award per year; award period in months; teaming; security clearances expected for key performers.)

SYSTEM RESEARCH: 2 awards (max) for months 0-24 ('06-'08), downselect to 1 for months 25-48 ('08-'10); > $3.0M per year; 48-month period; teaming: yes; clearances: yes.

COMPONENT RESEARCH - Algorithm Characterization: 1 award; $0.5M - $1M per year; 24-month period; teaming: yes; clearances: no.

COMPONENT RESEARCH - Graph Unification: 1 award; $0.25M - $2M per year; 24-month period; teaming: yes; clearances: no.

COMPONENT RESEARCH - Data Characterization: 1 award; $0.25M - $2M per year; 24-month period; teaming: yes; clearances: yes.

GAP FILLING RESEARCH: 4-10 awards; $0.15M - $0.75M per year; 48-month period; teaming: no; clearances: preferred.

SYSTEM EVALUATION ARCHITECTURE (SEA): 1 award; $0.5M - $1M per year; 48-month period; teaming: yes; clearances: yes.

SYSTEM RESEARCH, COMPONENT RESEARCH, and SEA TECHNICAL OBJECTIVES

Below are brief descriptions of the goals of the System Research, Component Research and System Evaluation Architecture research tasks. Offerors are encouraged to identify their own important technical ideas for their proposed work.

System Research

Goal: Develop a composable underlying system prototype architecture within which the Component and Gap Filling Research can be integrated. This architecture should ultimately be able to incorporate the handling of human feedback as a data source. The underlying hardware may include standard and "novel" architectures. Note that the program will not be considered successful until the technology is transitioned in whole or in significant part to the Intelligence Community.

System Evaluation Architecture (SEA)

Goal: Conduct all evaluations of system prototypes developed by the System Research offerors. Prototype evaluations can be conducted using the as-built software/hardware installations at the contractors' sites using remote VPN access. Consider the software and evaluation harnesses necessary to evaluate the prototypes at the end of the 2nd year, and collect data and algorithms to be used for the evaluation. With the introduction of human feedback from the System Research developers, the SEA offeror should be able to evaluate the intelligence value and cost-based workflow planning functionality of Tangram. The final evaluation will be an exhaustive functional and systematic exercise of the complete Tangram intelligence support system by the offeror. Ideally this would be an evaluation using classified data and real-world problems, and may require additional data sets, algorithms, and hardware, including multiple cycles of human feedback from multiple analysts over an extended period of time.

Component Research

Algorithm Characterization

Goal: Develop an Algorithm Description Language Specification and implement this specification for the algorithm set of the offeror's choice. Consider: 1) the ability to discriminate between algorithms for known data characteristics, 2) the ability to support sequencing of algorithms for a specified analytic product, and 3) the ability to support streaming or batch oriented work flows. The algorithm descriptions will be made available to the SEA contractor for evaluation and the System Research contractor(s) for downstream use.

Graph Unification

Goal: Develop the capabilities to support graph editing, visualization and annotation by a set of predetermined applications. Consider the data exchange interfaces for the applications so that all Tangram hypotheses or graphs can be ingested by these third party applications and their outputs can be ingested by Tangram. A critical part of this implementation will be the potential change to the original hypothesis exchange specification to accommodate incremental processing or incremental changes to hypotheses in lieu of wholesale replacement of versions of prior hypotheses. Consider an intercomponent hypothesis exchange specification and common graph representation specification. Implement the specifications on the algorithm set of the offeror's choice and provide the software for testing to the SEA contractor.

Data Characterization

Goal: Develop a Data Characterization Process Description to describe the characteristics of the data that will be used as input to the Tangram System. Develop any data enrichment services and/or data transformation services necessary to shred the data from one input format to another for use by each of the prototype developers. Consider the impacts of multiple data source integration and data enrichment on the data characterization and transformation capabilities and develop the data transformation software required to meet the multi-source data integration and enrichment requirements of the program and deliver this capability to the prototype developers (i.e., System Research).

Program Reviews - Year 1

Kickoff PI Meeting - Year 1: 5-day planning and coordination meeting in the Baltimore Washington Metropolitan area.

Mid-Year PI Meeting - Year 1: 5-day program progress review, planning and coordination working meeting in Florida.

Site Visit - Year 1: 1-2 day Government site visit to contractor's facility for contract performance review.

Program Reviews - Year 2

Initial PI Meeting - Year 2: 5-day planning and coordination meeting in the Mountain States.

Mid-Year PI Meeting - Year 2: 5-day program progress review, planning and coordination working meeting in the Southwest US.

Site Visit - Year 2: 1-2 day Government site visit to contractor's facility for contract performance review.

Program Reviews - Year 3

Initial PI Meeting - Year 3: 5-day planning and coordination meeting in the Baltimore Washington Metropolitan area.

Mid-Year PI Meeting - Year 3: 5-day program progress review, planning and coordination working meeting in the Southwest US.

Site Visit - Year 3: 1-2 day Government site visit to contractor's facility for contract performance review.

Program Reviews - Year 4

Initial PI Meeting - Year 4: 5-day planning and coordination meeting in the Midwest US.

Mid-Year PI Meeting - Year 4: 5-day program progress review, planning and coordination working meeting in Florida.

End of Program PI Meeting: 2-day planning and coordination meeting in the Baltimore Washington Metropolitan area.

Site Visit - Year 4: 1-2 day Government site visit to contractor's facility for contract performance review.

The following figure illustrates how the multiple technical objectives in these three thrust areas contribute to the overall objectives of the program. It also provides some insight into the expected interactions amongst the various participants in the program. Offerors are advised to note that the figure is intended to provide additional clarity to them and is not intended to serve as a comprehensive list of interactions or objectives.





GAP FILLING RESEARCH - AREAS OF INTEREST

The expectations of the Government in this technical thrust area are far more exploratory and open than the rest of Tangram. The investments we intend to make will fill recognized gaps in our existing knowledge discovery arsenal. While these research gaps are described as a set of problems with examples of promising approaches, we are actively seeking novel ideas to address the problems and are not prescribing the solution. What should be clear is that the Government is interested in establishing a solid foundation for developing information-based detection systems, discovering and monitoring changes in trends and anomalous behaviors.

Theory of Detection

One of the highest risk investments of the EAGLE program was in the area of developing a theory of detection in non-random networks. Other theoretical investigations included preliminary work on the effects of collaboration and information sharing in terrorist detection processes and the sensitivities of guilt-by-association models to runaway false detections.

The development of a solid theoretical base for terrorist entity and threat detection is desperately needed. The Government would prefer to invest in a multidisciplinary team to continue this research over the course of the Tangram program. The objectives of the research are: first, to support the System Evaluation Architecture contractor in developing a rigorous and statistically sound evaluation methodology; second, to support the Data Characterization contractors in defining and validating the data collection metrics that will guide the Tangram system's planning functions; third, to support the Algorithm Description contractors in validating that the algorithm performance metrics in the descriptions are accurate discriminators of algorithm performance; and finally, to continue to explore the systematic characteristics of the intelligence collection process and our terrorist opponents, to identify methods that will assuredly fail and methods that will produce the highest possible detection outcomes.

Deception Detection

The EAGLE program and its predecessor have made impressive progress on inference for intelligence applications characterized by large, ill-defined, high-uncertainty, dynamic networks of individuals, objects and transactions. The results of the research highlight several areas of need in the pursuit of effective analyst-support systems.

The Tangram program makes no distinction between deliberate acts to avoid detection and the consequences of spotty collection and reporting of intelligence. From the information analysis perspective of Tangram, both instances look the same. The salient feature of both is that we are trying to distinguish normal behaviors from seemingly normal behaviors of both the observed and the observing system. In large measure, we cannot readily distinguish the absolute scale of normal behaviors for either. What is most readily deduced are changes in their behaviors, which will first make them appear anomalous, then suspicious, and perhaps deceptive. While deception detection is our most earnest target, anomalous and suspicious behaviors are a satisfactory second best.

Intelligence applications typically are substantially more difficult than fraud detection, but the utility and usage of fraud detection systems nevertheless is illustrative. For example, fraud detection systems typically search for suspicious patterns of behavior. Unfortunately, individual suspicions rarely provide enough evidence to elicit an enforcement action.

Typically, fraud detection systems construct "cases", incrementally augment the cases with evidence, score and rank the cases by suspicion (dynamically), and provide an interface for analysts to consider the cases as a starting point and a way to collect and organize information.
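The case-based workflow just described can be sketched in a few lines. This is not any real fraud-detection product's design; the case identifiers, evidence weights, and additive scoring rule are all invented for illustration.

```python
# Hypothetical sketch of the fraud-detection workflow described above:
# construct cases, incrementally augment them with evidence, and
# dynamically re-rank them by suspicion. Weights and scoring are invented.
from collections import defaultdict

class CaseManager:
    def __init__(self):
        self.cases = defaultdict(list)  # case id -> list of evidence weights

    def add_evidence(self, case_id, weight):
        """Incrementally augment a case; a single fragment rarely suffices."""
        self.cases[case_id].append(weight)

    def ranked(self):
        """Score each case (here: summed weights) and rank by suspicion,
        giving analysts a prioritized starting point for inquiry."""
        scored = {cid: sum(ws) for cid, ws in self.cases.items()}
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

mgr = CaseManager()
mgr.add_evidence("case-A", 0.2)
mgr.add_evidence("case-B", 0.5)
mgr.add_evidence("case-A", 0.6)   # a new fragment raises case A above B
ranking = mgr.ranked()
```

The ranking is recomputed as each fragment arrives, which mirrors the "dynamically score and rank" behavior described above.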

Several approaches have been identified as potential investment areas:

Suspicion Scoring Under Uncertainty

A key tenet of deception with the intent to camouflage your existence or conceal your intentions is to blend in with the background. Intelligence collection is not a random act nor an indiscriminate vacuum. Rather, it is deliberate, focused and opportunistic when the rare or unusual fragment of evidence is acquired. The combination of intelligence collection, reporting and analysis is very unlike commercial fraud detection methods, which are no less deliberate and focused, but are an inherent part of the electronic transaction fabric of systems. Intelligence collection is far less cooperative or omnipotent. In most instances very little is known about new terrorist entities, implying that the intelligence analysis task has little "known" information to work from. Intuitively, establishing a suspicion score for individual information fragments is well beyond human capabilities. Yet, in such highly uncertain instances, a Tangram-like system may be the only method by which seemingly meaningless data becomes meaningful. The objective is to find the most important "known unknowns".

Recent research results suggest that collective inferencing techniques may provide a plausible approach to filling this gap. This technique is capable of making simultaneous inferences (scores) about large numbers of likely interrelated entities in large data collections and has met with some success.

To date the predominant approaches have used a guilt-by-association model to derive suspicion scores. In cases where we have knowledge of a seed entity in an unknown group we have been very successful at detecting the entire group. However, in the absence of a known seed entity, how do we score a person if nothing is known about their associates? In such an instance guilt-by-association fails.

However, collective inferencing research has demonstrated the potential power of drawing inferences about everyone simultaneously, so scores about an individual and associates are computed simultaneously. Although attractive, collective inference for real intelligence analysis is still a promise rather than a reality. Existing techniques are far too simple. Much more research is necessary to understand its applicability to real intelligence problems and to design appropriate methods to fill this critical gap.
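A crude sketch can show both behaviors at once. The graph, seed score, and damping factor below are invented; this is not an EAGLE or Tangram algorithm, just iterative score propagation over an association graph. With a vetted seed it behaves like guilt-by-association; without one, scores never leave zero, which is exactly the failure mode noted above.

```python
# Minimal, hypothetical sketch: iterative suspicion-score propagation
# over an association graph. Not a real EAGLE/Tangram algorithm.

def propagate(edges, seeds, rounds=20, damping=0.5):
    """Each entity's score is its seed score plus a damped average of
    its neighbors' scores from the previous round."""
    nodes = {n for e in edges for n in e}
    nbrs = {n: [] for n in nodes}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    score = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(rounds):
        score = {
            n: seeds.get(n, 0.0)
            + damping * sum(score[m] for m in nbrs[n]) / len(nbrs[n])
            for n in nodes
        }
    return score

# Two cliques; only one contains a vetted seed entity.
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # group with a known seed
         ("x", "y"), ("y", "z"), ("x", "z")]   # no seed: scores stay zero
s = propagate(edges, seeds={"a": 1.0})
```

The seeded clique lights up while the unseeded one stays dark; collective inference approaches aim to score everyone simultaneously so that the second group is not invisible by construction.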

Active Information Acquisition

Tangram embraces the concept of human feedback as a knowledge source. However, Tangram may be capable of providing feedback about existing information gaps in our data collections, which admittedly suffer from spotty collection and reporting. Since Tangram inherently performs suspicion scoring of terrorist entities, it may be capable of calculating expected information value scores of unknown information to improve the certainty of existing entity suspicion scores.

Sometimes called Active Learning, conceptually the approach would be to perform a succession of automated "what if" scenarios that compute the expected value of acquiring additional information. The information queries with the highest expected value would return prospective intelligence information requirements for analysts to consider as future lines of inquiry.
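One conventional way to make the "what if" computation concrete is expected entropy reduction; the sketch below uses that standard measure, with entirely hypothetical priors and candidate queries, to rank two invented collection requests by expected information value.

```python
# Hedged sketch of the "what if" loop: for each candidate collection
# query, estimate how much the expected answer would reduce uncertainty
# in a suspicion score. Priors, posteriors, and queries are invented.
import math

def entropy(p):
    """Uncertainty (in bits) of a Bernoulli suspicion estimate."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_value(prior, p_yes, post_if_yes, post_if_no):
    """Expected entropy reduction if the unknown fact were acquired."""
    expected_post = (p_yes * entropy(post_if_yes)
                     + (1 - p_yes) * entropy(post_if_no))
    return entropy(prior) - expected_post

# Rank two hypothetical collection requests by expected value; the more
# decisive query (it would push the score to 0.9 or 0.1) wins.
queries = {
    "confirm shared address": expected_info_value(0.5, 0.5, 0.9, 0.1),
    "confirm common surname": expected_info_value(0.5, 0.5, 0.6, 0.4),
}
best_query = max(queries, key=queries.get)
```

The highest-value queries would then be returned as prospective intelligence information requirements for analysts to consider.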

Behavior Profiling in Dynamic Worlds

Existing tools and techniques employ the concept of behavior-based suspicion scoring of terrorist entities and targets. The underlying assumption of existing approaches is that behaviors are a constant that can be described as a graph. Yet, behaviors are not constant and a recognized gap in current terrorist detection processes is the difficulty of handling the dynamic characteristics of behaviors.

How can we profile dynamic behavior well enough to be able to identify, with more or less confidence, entities who want to remain anonymous? Can we identify entities who have taken over the roles of other entities of interest (e.g., those recently apprehended) simply by using the changes in their behavior? Can we incorporate the techniques commonly used by intelligence analysts with the power of massive collective representations?

Network Dynamics Over Time

Many forms of link data have date codes on the links, and this should be exploited. Groups can add or lose members over time. Evidence from the past often bears on recent interactions. Some early research has been performed, and the experiments resolved cases where totally irresolvable groups became perfectly characterized once time was accounted for.
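The simplest way to exploit date codes is to slice the link data into time windows before analysis. The records and window width below are invented; the sketch only shows how links that look like one undifferentiated group in the aggregate graph separate cleanly when bucketed by time.

```python
# Sketch with invented data: bucket date-coded links into fixed-width
# time windows so that temporally distinct activity becomes separable.
from collections import defaultdict

def links_by_window(links, window_days=30):
    """Group (day, a, b) link records by time window index."""
    windows = defaultdict(set)
    for day, a, b in links:
        windows[day // window_days].add(frozenset((a, b)))
    return dict(windows)

# Aggregated, these links suggest one connected group; sliced, they show
# that ("a","b") and ("b","c") were active in different periods.
links = [(3, "a", "b"), (10, "a", "b"), (40, "b", "c"), (55, "b", "c")]
w = links_by_window(links)
```

Real approaches would go further (e.g., tracking membership churn across windows), but even this bucketing recovers structure the static graph destroys.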

Tip Management

Some terrorist data collections are filled with information fragments that are unvetted as to their meaning or significance; they are simply reported. In some applications (e.g., food security and consumer complaint hotlines), data is collected from volunteers that is generally very noisy but can be used to detect sudden trends. New systems, such as the TIPS system, through which reports of suspicious activity from transportation workers and dock workers are filed, are examples of new information sources where fragmentary tips might be discovered. The Intelligence Community and military intelligence units have hundreds of small data collections like TIPS. It may well be beneficial to perform spatial analyses for "hotspots", or space-time analyses to detect previously unseen trends. The U.S. Government recognizes that we do not have an effective way of exploiting these sources and is asking for novel ideas to fill this gap. We believe that a combination of link discovery and pattern learning tools and other tools used in epidemiology, spatial statistics, dynamic belief networks, and graph theory will be applicable here.
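As a toy illustration of the space-time "hotspot" idea, the sketch below bins invented tip reports by grid cell and week and flags dense bins with a raw count threshold; real approaches would use proper spatial scan statistics rather than counts, and every coordinate, bin width, and threshold here is hypothetical.

```python
# Toy sketch of a space-time hotspot scan over tip reports. Grid cells,
# time bins, and the threshold are invented; this is a stand-in for the
# epidemiology-style scan statistics mentioned above.
from collections import Counter

def hotspots(tips, cell=1.0, time_bin=7, threshold=3):
    """Count tips per (grid cell, week) and flag unusually dense bins."""
    counts = Counter(
        (int(x // cell), int(y // cell), day // time_bin)
        for x, y, day in tips
    )
    return [bin_ for bin_, n in counts.items() if n >= threshold]

# Four tips cluster in one cell within one week; a fifth is elsewhere.
tips = [(0.2, 0.3, 1), (0.5, 0.9, 2), (0.7, 0.1, 5), (0.9, 0.8, 6),
        (5.0, 5.0, 40)]
flagged = hotspots(tips)
```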

Validated Synthetic Data

The EAGLE program's synthetic data generator, called the Performance Evaluation (PE) Lab, produced by Information Extraction and Transport, Inc. has become a staple of unclassified terrorist knowledge discovery research activities throughout the U.S. Government. As beneficial as it is, it lacks the most essential credential of any simulation system - validation. The Tangram program would like to fill this gap so that every element of the Intelligence Community could employ uncleared researchers to produce verifiably accurate and trustworthy algorithms and tools to defeat terrorism.

The existing PE Lab is capable of creating a variety of social networks that are consistent with existing social network theory of large populations. However, the data sets it produces do not reflect the social networks that existing intelligence data sources portray, which look more like a patchwork of holes.

Presuming the Data Characterization research task is successful, a new synthetic data generator will be required to produce unclassified data sets with the known characteristics of classified data sources.

Moreover, by generating validated synthetic data sets the Tangram program will have the ability to test and catalog new and existing algorithms in an unclassified environment; the consequence being faster delivery of proven detection methods to operational environments.

PROTOTYPE EVALUATIONS

Prototype evaluations by the SEA contractor will be conducted by presenting varied combinations of the four key variables: 1) queries/questions, 2) data, 3) algorithms, and 4) hardware to a continuously operating prototype system. The data will be inserted via any previously agreed upon structured form (e.g., spreadsheet, database, xml file, etc.). The algorithms will be inserted in a predefined form (i.e., executable code, processor specifications, algorithm description, heuristic performance results, etc.). The hardware will be inserted using a predefined hardware description method.

The prototype will be evaluated by varying the key variables of the system and: comparing the precision of the results with ground truth; comparing the workflow used with the workflow prescribed by human experts; measuring the cost of producing the answer and the human-scored intelligence value of the answer; measuring productivity (intelligence value produced per unit time); and measuring efficiency (intelligence value produced per unit cost).

Metrics may include adaptability (the ability of the system to dynamically reorganize workflow based on incremental changes to data and algorithms), controllability (the ability to guide and correct system operation when human intervention is required), and recoverability (the ability to provide high availability and continuous operational recovery mechanisms).
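The productivity and efficiency arithmetic above reduces to two ratios. The sketch below only restates those definitions with hypothetical numbers; the value scale, hours, and cost units are invented.

```python
# Minimal sketch of the scoring arithmetic defined above; all numbers
# are hypothetical. Productivity is intelligence value per unit time,
# efficiency is intelligence value per unit cost.

def productivity(intel_value, hours):
    return intel_value / hours

def efficiency(intel_value, cost):
    return intel_value / cost

# A hypothetical run: analysts score the answer's intelligence value at
# 8.0 on some scale; it took 4 hours and 2 cost units to produce.
p = productivity(8.0, 4.0)   # value produced per unit time
e = efficiency(8.0, 2.0)     # value produced per unit cost
```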

Offerors should be cognizant of the issues that arise from an increase in complexity while estimating proposed capabilities. The following descriptions are provided as a sample of the required analysis that offerors may need to conduct before constructing their proposals. In all cases, the offeror should not assume the queries can be answered by the available data, algorithms or hardware. Additionally, offerors should assume that some data sources are streaming sources that cannot be held entirely in memory.



Level 1 - Contractor defines the training questions (queries) that can be answered and the data to be used. The Government collaborates with the contractor to select the realistic algorithms and hardware.

Level 2 - Government establishes the queries to be answered using realistic data sets, algorithms and hardware.

Level 3 - Surprise data sets may require the system to perform data integration, data enrichment, or data transformations.

Levels 4 thru 6 - Multiple and previously unseen surprise variations will likely increase the cost of workflow planning and query response. Intelligence value vs. cost tradeoff functionality may be required to perform computations in a reasonable amount of time.

Level 7 - Intelligence value vs. cost tradeoffs significantly increase the complexity as surprises through human feedback of prior hypotheses necessitate re-computation of prior hypotheses fragments, pre-staged data sets etc.

Level 8 - Human feedback may negate earlier hypotheses generated in a prior workflow, requiring newly composed hypotheses to be derived from prior hypotheses, new data and human annotations.

Level 9 - The large numbers of concurrent analyst queries and feedback dramatically increase computational load for workflow planning and cross workflow sharing of intermediate results. Computation and maintenance of intelligence value estimation and validation results becomes critical.

COMPONENT AND GAP FILLING EVALUATIONS

Component evaluations will be conducted on hardware at the SEA contractor's facility. Component evaluations will verify that each of the components meets the prototype developers' design goals for inserting new data, algorithms and hardware. Component evaluations will verify that all components produce and ingest hypotheses in a common hypothesis specification format as needed by the prototype.

Components in the Gap Filling Research area will be evaluated annually using specially generated synthetic data sets.