I met Jeanne Holm last week during the W3C Advisory Committee meeting. Jeanne is the Chief Knowledge Architect at the Jet Propulsion Laboratory (JPL), California Institute of Technology, and leads the Knowledge Management Team at NASA. When we started talking about NASA’s use of Semantic Web technology, I asked whether the application satisfies three criteria:

1. The application aggregates data from three independently developed sources.
2. The data is used in ways not originally intended (“serendipitous reuse”).
3. The cost of aggregation is low, requiring only a small amount of connective tissue.

She answered “yes” to all three, a Semantic Web trifecta. I then asked her to share the details, which we captured below.

Jeanne Holm (JH): The goal of our project was to make it easy to find expertise within an organization, or, as you’ll see, across organizational boundaries. The project is called POPS for “People, Organizations, Projects, and Skills.” The acronym does not include E for Expert for a good reason: we tried three times to create a system with data specifically about expertise, but failed each time for different social reasons. Each attempt relied on self-generated lists of expertise. In the first attempt, people over- or under-inflated their expertise, sometimes to bolster their resumes. The second attempt prompted labor unions to get overly involved because greater expertise could be tied to higher pay. The third approach involved profiles verified by management, and that led to a number of human resources grievances when there was a disagreement. In all cases, the data became suspect.

JH: On the fourth try we decided to infer expertise and not require manual updates to profile information. We then realized that we already had a lot of valuable data that we could repurpose. We decided to try out Semantic Web technology to unlock our data at NASA.

JH: It turns out at NASA there’s a lot of valuable legacy data, including:

Data about publications dating back to 1921. This high-quality, metadata-rich repository is curated by librarians. It includes about 3.5 million records. The publication data tells you a lot more than simply who is an expert on which topic. For instance, if two people co-authored a publication, and they work for different organizations, suddenly we have knowledge about potential expertise in other organizations. There are also people who have retired and thus don’t appear in our current database of employees, but whom we might consult for their expertise.

The employee directory. This provides information about physical locations (helpful when you need to call on someone nearby) as well as more connections to other people in the same project or lab.

Time-keeping data, that is: how much time people have spent on various projects. This can be a useful metric for expertise and provide connections with other people from the same project.

Human Resources (HR) data. Some of this data is more subjective than the rest, but there is still a lot of useful information about training programs and other ways to validate experience.

JH: Note that we don’t own or maintain any of the data, nor did we make any effort to work with people in advance on a standard for exchange.

IJ: And you were able to infer expertise by mashing up all this data?

JH: Yes. But even more importantly, in the previous attempts, when we required people to manage expertise information explicitly, that information proved to be inaccurate. Bad results quickly undermined people’s confidence in the system. Having the system surface someone who turns out not to have the expertise is much worse than having a trustworthy system return zero records for a given query.

IJ: How did you manage privacy issues?

JH: Obviously with human resources data, but also with legacy data generally, there can be sensitivities about sharing, as well as a desire to maintain privacy. In the case of the HR data, we only had access to a portion of the data.

JH: In the real world people feel strong ownership of their data, and they hug it, and they won’t let it go. The reasons may have to do with job security. Or the data is not architected well enough to be shared, and the cost of cleaning it up is considered greater than the benefit of reuse. So very few database administrators (DBAs) willingly make data available.

IJ: How did you pry it from them?

JH: Through service agreements! These agreements served two purposes:

1. We documented the fields we were using, and the owner committed to notify us if they changed their data structure.
2. We documented how we intended to use the data, which made people much more comfortable.

JH: By the way, after we completed our project, it turns out that other parties were able to reuse the same unlocked data. A sort of micro-economy of data richness grew within NASA as soon as the environment was seeded with data. You asked about serendipitous reuse: that’s exactly what we saw happen.

IJ: Tell me about your use of specific technologies.

JH: Kendall Clark at Clark &amp; Parsia and Andy Schain at NASA HQ were the proponents of bringing in semantic technologies, and they did the technical heavy lifting to make POPS happen (see the case study). We used RDF to aggregate the data and SPARQL for queries, constructed through a friendly user interface. Here’s a sample query: “I am looking for someone at the Jet Propulsion Lab who has done thermal engineering and who worked on the Galileo spacecraft.” In other words: I don’t just need a thermal engineer; I need someone who knows an existing system that I’ve inherited in my current project. Knowing that someone worked on a particular project is extremely important.
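A query like that sample could be expressed in SPARQL roughly as follows. This is only a sketch: the `pops:` namespace, the property names, and the resource identifiers are hypothetical stand-ins, not the actual POPS schema.

```sparql
# Hypothetical sketch: find people at JPL who have thermal-engineering
# expertise and who logged work on the Galileo project.
PREFIX pops: <http://example.org/pops#>

SELECT DISTINCT ?person ?name
WHERE {
  ?person  pops:name      ?name ;
           pops:worksAt   pops:JPL ;
           pops:hasSkill  pops:ThermalEngineering ;
           pops:workedOn  ?project .
  ?project pops:projectName "Galileo" .
}
```

In a design like this, the skill and project triples could come from entirely different sources (publications, time-keeping, HR), which is what makes the RDF aggregation step do the real work: the query itself stays simple.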

IJ: Do you have information about cost savings, by any chance?

JH: Yes. First, we saved time and money by not requiring people to manage profile information explicitly (and remember, they weren’t doing a good job of it anyway). We estimated that people spent about four hours a year creating and managing their expertise profiles. If 140,000 people do that, the approximate cost of profile management is $38 million annually, excluding system maintenance costs. So by using Semantic Web technology to infer that information, and infer it more reliably, we avoided about $38M a year in recurring costs.

JH: We made a small investment up front in technology maturation. It is imperative that our systems be robust, so we spent some time looking at different solutions, including some technologies that may not have been ready for prime time. The government can be shy that way! The total cost for the first deployment was somewhere between $250K and $300K. But now the system costs me $20K a year to maintain. Some of our systems can cost millions of dollars to maintain, especially if we have to deal with data generation, quality, and provenance.

IJ: Can you say a word about the “linked data” aspect of the project? What do you learn through the aggregation?

JH: The system presents a Web of related information, and the social relationships in particular are extremely valuable. For example, suppose I’m looking for a geothermal engineer. My first query produces ten candidates. I probably already know some of them, so I start looking at names I don’t know. How do I learn whether these people have the necessary skills and experience? That’s when the social graph that emerges from the aggregated data comes in, such as “you have a colleague in common.” It may be enough to know that we share a friend, or, I can pick up the phone and call my friend to learn more. Even if I find zero candidates for my query, I can enlarge the scope of the query, or search for “someone like this person” or “someone who worked in the same project as someone who has the qualities I need.”
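The “someone who worked on the same project as someone who has the qualities I need” expansion can be sketched the same way. Again, the vocabulary below is illustrative, not the actual POPS schema:

```sparql
# Hypothetical sketch: widen the search to colleagues of qualified people,
# following the shared-project link in the aggregated graph.
PREFIX pops: <http://example.org/pops#>

SELECT DISTINCT ?colleague ?name
WHERE {
  ?expert    pops:hasSkill  pops:GeothermalEngineering ;
             pops:workedOn  ?project .
  ?colleague pops:workedOn  ?project ;
             pops:name      ?name .
  FILTER (?colleague != ?expert)
}
```

The point of the sketch is that the social graph is not stored anywhere as such; it emerges from joining project-membership triples that were aggregated from independent sources.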

JH: One interesting part of getting people to trust our system was that we told them about the sources of the data. We spent several hundred thousand dollars looking at technology options, data sources, data quality, and how to build a system that would instill confidence. We found that people’s trust in the aggregation was inseparable from their trust in the individual pieces. Of course, their trust is strengthened by the usefulness and accuracy of the results.

IJ: What will you do next with POPS?

JH: The US Army deployed the system using the open code, and we are helping the Air Force and Office of Naval Research deploy it as well. We recently met with all of them to discuss what has worked well. Not only will the systems be used by each organization, but we will also be able to make them work across organizational boundaries. This will allow someone in the Army to find someone with relevant expertise who happens to be working at NASA.

IJ: Does everybody have to agree to the same vocabulary?

JH: No. In fact, each organization will be using different types of data for inferring expertise. As a result, there are some usability choices (e.g., which columns to display, since the data sources are different).

IJ: Since the data are different, how do you make sense of the results? Do you have to map terms? Weight data sources?

JH: Each organization will do weighting internally because each query result is “valid” within its own context. We don’t expect to do weighting between the systems. The organizations don’t have to change their internal representation of data, but we do think it will be worthwhile to create mappings. Moreover, the customers think it will be useful. But even without mappings, the system would still be useful.

IJ: Do you see the costs decreasing with each deployment?

JH: Yes, they do go down. The Army set up the system on their own; the documentation was good enough for them to do so. The hardest part of setting it up is finding the data sources. Once you find the right data sources, you are a month away from a working system. The third and fourth deployments were about one third the cost of the first deployment. Later I’ll let you know about the cost of aggregating the four systems.

IJ: Thank you, Jeanne, I will look forward to that!