To advance its artificial intelligence technology, Google has compiled roughly 57,000 YouTube clips for its AI systems to watch and learn to recognize different human actions.

The three-second clips were sourced from a range of content genres and countries of origin, and many were taken from popular films and television shows. All told, they highlight roughly 80 so-called “atomic visual actions” (AVAs), including walking, kicking, hugging, and shaking hands, reports the Silicon Valley Business Journal. In a blog post, Google notes that although its AI systems have made breakthroughs in identifying and classifying static objects in YouTube videos, “recognizing human actions still remains a big challenge…due to the fact that actions are, by nature, less well-defined than objects in videos.”

If Google perfects machine learning that can recognize human actions, it could improve YouTube’s search functions and, in turn, content discoverability. You can check out the 57,000 YouTube clips the company is currently working with right here.

Earlier this year, in the face of the YouTube ‘Adpocalypse’, Google also harnessed advances in AI to identify and remove terrorism-related content “in a scalable way,” as well as to redirect searches for extremist videos to anti-terrorism playlists.