Parsing Web Pages for Images With Apache NiFi

The processor described below could be used to build a web crawler that downloads images. In this example, I am downloading some awesome images from Pixabay!

URL: https://pixabay.com/en/photos/?image_type=&cat=&min_width=&min_height=&q=data+science&order=popular

I wanted to be able to grab every image from a page, since I have some websites I want to back up my images from. So I added a processor that uses JSoup to do the parsing.
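The processor itself does this with JSoup in Java. As a rough illustration of the same idea, here is a minimal Python sketch (stdlib only; the names are mine, not the processor's) that collects the src of every img tag and resolves it against the page URL:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class ImageSrcParser(HTMLParser):
    """Collect the resolved src URL of every <img> tag on a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths against the page URL.
                self.images.append(urljoin(self.base_url, src))


def extract_image_urls(html, base_url):
    parser = ImageSrcParser(base_url)
    parser.feed(html)
    return parser.images
```

In the actual flow, each URL this produces becomes its own flow file so the images can be fetched and routed individually.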

Once you download the NAR from GitHub, deploy it to your /usr/hdf/current/nifi/lib directory, and restart Apache NiFi, you will have a new processor: ImageProcessor, listed as version 1.6.0.
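For a typical HDF layout, the deployment boils down to something like the sketch below (the NAR filename and the restart command are assumptions; adjust both for your install):

```shell
# Copy the custom NAR into the NiFi lib directory on each node.
# Filename shown is an assumption -- use the NAR you downloaded.
cp nifi-imageextractor-nar-1.6.0.nar /usr/hdf/current/nifi/lib/

# Restart NiFi so it loads the new processor.
/usr/hdf/current/nifi/bin/nifi.sh restart
```

If you run a multi-node cluster, the NAR needs to land on every node before the restart.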

You can examine and test the Java source code if you wish.

Here is an example flow: grab all the images from a Pixabay URL, filter out the empty images, and split the result into individual image URLs. We pull out that tag and then download those images. If an image is not blank or too small, I route it to TensorFlow to run Inception on it. I extract the image metadata, and then we send everything to my production cluster, which stores the image in an object store and the metadata in a Hive table.
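The small/blank filtering step can be thought of as a routing function over the fetched bytes. Here is a minimal Python sketch of that idea; the thresholds, the PNG-only dimension check, and the relationship names are all assumptions for illustration (the real flow uses NiFi routing processors, not this code):

```python
import struct

# Assumed thresholds for illustration only.
MIN_WIDTH = 100
MIN_HEIGHT = 100
MIN_BYTES = 1024


def png_dimensions(data):
    """Return (width, height) for a PNG, or None if not a valid PNG."""
    # PNG layout: 8-byte signature, then the IHDR chunk whose
    # width and height are big-endian u32s at offsets 16 and 20.
    if len(data) < 24 or data[:8] != b"\x89PNG\r\n\x1a\n":
        return None
    width, height = struct.unpack(">II", data[16:24])
    return width, height


def route(data):
    """Return the relationship a flow file would take: success/small/unknown."""
    if len(data) < MIN_BYTES:
        return "small"       # blank or truncated download
    dims = png_dimensions(data)
    if dims is None:
        return "unknown"     # not a PNG; a real flow would check more formats
    width, height = dims
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return "small"
    return "success"
```

Flow files routed to "success" would continue on to the TensorFlow step; "small" goes to a terminating relationship.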

Above is a pictorial representation of the routing that filters away small and blank images.

This is a pretty basic flow to process. I use my custom Attribute Cleaner to clean up the names and make all the attribute names Apache Avro name compliant.
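Avro names must start with a letter or underscore and contain only letters, digits, and underscores, so the cleanup boils down to something like this sketch (the function name is mine, not the Attribute Cleaner's actual code):

```python
import re


def avro_safe(name):
    """Rewrite an attribute name so it is a valid Apache Avro name."""
    # Replace every character outside [A-Za-z0-9_] with an underscore.
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    # Avro names cannot start with a digit (or be empty).
    if not cleaned or not re.match(r"[A-Za-z_]", cleaned[0]):
        cleaned = "_" + cleaned
    return cleaned
```

Applied to every attribute on a flow file, this guarantees the metadata can land in an Avro-backed record without schema errors.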

Some of the useful metadata pulled from the image is shown below. Note the Height and Width attributes in particular; they are very useful.

High-Level Processing Flow

Example Data