Programs designed to resemble humans infiltrated Facebook recently and made off with 250 gigabytes of personal information belonging to thousands of the social network's users, researchers said in an academic paper released today.

The eight-week study was designed to evaluate how vulnerable online social networks are to large-scale infiltration by programs designed to mimic real users, researchers from the University of British Columbia in Vancouver said in the paper (PDF), titled "The Socialbot Network: When bots socialize for fame and money."

Each of the 102 "socialbots" the researchers released onto the social network bore the name and profile picture of a fictitious Facebook user and was capable of posting messages and sending friend requests. The researchers used the bots to send friend requests to 5,053 randomly selected Facebook users, limiting each account to 25 requests per day to avoid triggering anti-fraud measures. During that initial two-week "bootstrapping" phase, 976 requests, or about 19 percent, were accepted.

During the next six weeks, the bots sent connection requests to 3,517 Facebook friends of the users who accepted requests during the first phase. Of those, 2,079 users, or about 59 percent, accepted the second round of requests. The researchers attributed the increase to the "triadic closure principle," which holds that two users who share a mutual friend are three times more likely to become connected.
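The two-phase strategy described above can be sketched as a simple simulation. The acceptance probabilities below are the rates the study reported (19 percent for strangers, 59 percent when a mutual friend exists); the code itself, and all names in it, are illustrative assumptions, not the researchers' actual tooling.

```python
import math
import random

random.seed(42)

# Acceptance rates reported in the study (assumptions for this sketch).
P_STRANGER = 0.19       # phase 1: request from a complete stranger
P_MUTUAL_FRIEND = 0.59  # phase 2: requester shares a mutual friend

NUM_BOTS = 102          # size of the socialbot network
DAILY_LIMIT = 25        # per-bot request cap to avoid anti-fraud triggers

def run_phase(targets, accept_prob):
    """Send one friend request per target; return the list of accepters."""
    return [t for t in targets if random.random() < accept_prob]

# Phase 1 ("bootstrapping"): request 5,053 randomly chosen users.
phase1_targets = list(range(5053))
phase1_accepted = run_phase(phase1_targets, P_STRANGER)

# Phase 2: target 3,517 friends of phase-1 accepters, where the bot
# now appears to share a mutual friend with each target.
phase2_targets = list(range(3517))
phase2_accepted = run_phase(phase2_targets, P_MUTUAL_FRIEND)

total_sent = len(phase1_targets) + len(phase2_targets)
total_accepted = len(phase1_accepted) + len(phase2_accepted)

# The rate limit also bounds how fast phase 1 can proceed at all.
min_days = math.ceil(len(phase1_targets) / (NUM_BOTS * DAILY_LIMIT))

print(f"Phase 1 acceptance: {len(phase1_accepted) / len(phase1_targets):.0%}")
print(f"Phase 2 acceptance: {len(phase2_accepted) / len(phase2_targets):.0%}")
print(f"Overall acceptance: {total_accepted / total_sent:.0%}")
print(f"Minimum days for phase 1 at the rate limit: {min_days}")
```

Note that 102 bots capped at 25 requests a day could in principle send all 5,053 phase-1 requests in two days; the researchers' actual two-week bootstrapping pace spread the activity out far more conservatively.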

The researchers concluded that online social networks are "highly vulnerable" to large-scale infiltration, reporting an overall infiltration rate of 80 percent.

"From the OSN [online social network] side, we show that it is not difficult to fully automate the overall operation of an SbN [socialbot network], including accounts creation," the researchers wrote in the paper, which is scheduled to be presented at next month's Annual Computer Security Applications Conference in Orlando, Fla. "From the users' side, we show that most OSN users are not careful enough when accepting connection requests sent by strangers, especially when they have mutual connections."

Networks' defense mechanisms, such as the Facebook Immune System (FIS), proved ineffective at identifying and eliminating fake profiles, the researchers found. Only 20 percent of the socialbots were blocked by the FIS, and only because users had flagged the accounts as spam.

Researchers cautioned that the data available to the bots could be used for identity theft.

"As socialbots infiltrate a targeted OSN, they can further harvest private users' data such as email addresses, phone numbers, and other personal data that have monetary value," the researchers wrote. "To an adversary, such data are valuable and can be used for online profiling and large-scale email spam and phishing campaigns."

A Facebook representative initially declined to address the specifics of the report, saying that Facebook would use the research as part of its process of addressing new threats and that the network has defenses in place to prevent theft of user data.

After digging into the researchers' findings and methodology, the company said it had come across some noteworthy shortcomings.

"We have numerous systems designed to detect fake accounts and prevent scraping of information. We are constantly updating these systems to improve their effectiveness and address new kinds of attacks. We use credible research as part of that process," a Facebook representative said. "We have serious concerns about the methodology of the research by the University of British Columbia and we will be putting these concerns to them. In addition, as always, we encourage people to only connect with people they actually know and report any suspicious behavior they observe on the site."

Update November 2 at 7:04 a.m. PT: Added in Facebook's latest response on the University of British Columbia research.