The key to hacking the brain is knowing what makes it tick. "People tend to remember and forget the same set of images on average," Aditya Khosla, team lead on MemNet and a researcher at MIT's CSAIL, told Engadget. "Even though we all have different backgrounds and experiences, somehow our brains are wired in a way that we tend to remember a similar set of images."



To understand that visual experience and reaction, the researchers turned to Amazon Mechanical Turk for a crowdsourced experiment. They had about 5,000 online participants view a stream of images and press a key every time an image looked familiar to them. Based on their responses, the crew assigned a memorability score to each image. "We found that there's a large consistency," says Khosla. "Even though people came from a variety of backgrounds, the memorability of the images was preserved."
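The article doesn't spell out the scoring formula, but a natural way to turn repeat-detection responses into a per-image score is the fraction of participants who flagged the image's repeat. Here is a minimal sketch under that assumption; the filenames and responses are invented for illustration, and the study's actual scoring may differ.

```python
# Hypothetical repeat-detection data: 1 = the participant pressed the key
# when the image reappeared, 0 = they missed the repeat.
image_responses = {
    "sunset.jpg":  [1, 0, 1, 1, 0, 1, 1, 1],
    "hallway.jpg": [0, 0, 1, 0, 0, 1, 0, 0],
}

def memorability_score(responses):
    """Fraction of participants who correctly recognized the repeated image."""
    return sum(responses) / len(responses)

scores = {name: memorability_score(r) for name, r in image_responses.items()}
print(scores)  # {'sunset.jpg': 0.75, 'hallway.jpg': 0.25}
```

Averaging over many participants is what makes the "large consistency" claim testable: if memorability were idiosyncratic, per-image scores would cluster near chance rather than spreading out the way these do.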



The scored images were then fed to MemNet, which used convolutional neural networks (described in an accompanying paper) to make predictions almost as accurate as the memorability scores from a diverse group of humans. According to an MIT report, it performed 30 percent better than existing software and came within a few percentage points of the average scores collected in the online experiment. What makes the predictive tool particularly useful is that it generates a heatmap along with the score, highlighting the regions that are likely to be remembered and those that aren't.
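One generic way to produce a region-level heatmap from any image-scoring model is occlusion: mask each region in turn, re-score the image, and record how much the score drops. The sketch below uses a stand-in scorer (mean pixel intensity) purely for illustration; it is not MemNet's method, whose internals the article doesn't describe.

```python
def toy_score(image):
    # Placeholder "memorability" score: mean pixel intensity.
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def occlusion_heatmap(image, score_fn, block=2):
    """Score drop per block-sized region when that region is masked out."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * (w // block) for _ in range(h // block)]
    for by in range(h // block):
        for bx in range(w // block):
            masked = [row[:] for row in image]          # copy the image
            for y in range(by * block, (by + 1) * block):
                for x in range(bx * block, (bx + 1) * block):
                    masked[y][x] = 0.0                  # occlude this region
            heat[by][bx] = base - score_fn(masked)      # bigger drop = hotter
    return heat

# A 4x4 toy "image" whose bright top-left corner drives the score.
image = [
    [0.9, 0.9, 0.1, 0.1],
    [0.9, 0.9, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
print(occlusion_heatmap(image, toy_score))
```

Running this, the top-left cell of the heatmap shows by far the largest score drop, which is exactly the "which parts matter" signal the article describes MemNet surfacing alongside its overall prediction.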

Khosla calls that feature an "instant focus group." The heatmap could potentially change the way advertisers place products in a commercial, or it could help educators present information to students in a more memorable way. "If we can understand what drives our memory and what kind of images stick in our memory, we might be able to modify the way content is presented in education to make it easier to remember," he says. "We would essentially exploit the structure of the brain to make that content sink in a lot better."



In previous research, the team tweaked faces in photos to make them more memorable. But this time they included general images, which can be harder for an algorithm to process. To meet that challenge, the team employed the deep-learning approach to train the network to sift through giant heaps of data and visual cues to find a memorability pattern that imitates the human visual experience. "While deep-learning has propelled much progress in object recognition and scene understanding, predicting human memory has often been viewed as a higher-level cognitive process that computer scientists will never be able to tackle," Aude Oliva, principal research scientist, said in the report. "Well, we can, and we did!"



MemNet could have a significant impact on the way we take selfies (an app that can predict memorability is reportedly underway) or help an advertiser be more efficient, but the algorithm can't yet handle the subtle details that the human eye picks up. "It can do a reasonable job of predicting which face in a group is more memorable," says Khosla. "But if we want to go into small details, [detecting] different poses or [picking] the right logo for a company, for instance, it won't be terribly informative."

[Image credit: CSAIL]