Google has released Open Images V5, an update to its open-source image dataset, and announced the second Open Images Challenge, to be held at this autumn’s 2019 International Conference on Computer Vision (ICCV 2019).

First introduced in 2016, Open Images is a collaborative release comprising about nine million images annotated with labels covering thousands of object categories. The new version updates 2018’s Open Images V4.

Open Images V5 adds segmentation masks for 2.8 million objects across 350 categories. Unlike bounding boxes, which only identify the general area in which an object is located, segmentation masks trace the outline of the target object, characterizing its spatial extent at a much finer level of detail.
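The difference in precision can be illustrated with a toy example (not Open Images code, just a NumPy sketch): for a thin diagonal object, the tightest bounding box derived from a binary mask covers far more pixels than the object itself actually occupies.

```python
import numpy as np

# Toy binary mask: a thin diagonal object inside a 6x6 image.
mask = np.zeros((6, 6), dtype=bool)
mask[1:5, 1:5] = np.eye(4, dtype=bool)  # 4 "on" pixels along a diagonal

# The tightest bounding box around the mask: first/last rows and
# columns that contain any object pixel.
rows = np.any(mask, axis=1)
cols = np.any(mask, axis=0)
y0, y1 = np.where(rows)[0][[0, -1]]
x0, x1 = np.where(cols)[0][[0, -1]]

mask_area = int(mask.sum())                    # pixels the object covers: 4
box_area = int((y1 - y0 + 1) * (x1 - x0 + 1))  # pixels the box covers: 16

print(mask_area, box_area)  # → 4 16
```

Here the bounding box claims four times the object’s true extent, which is why per-pixel masks enable a more precise evaluation of localization quality.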

Example masks on the Open Images V5 training set

The 2.68 million segmentation masks on the training set were generated with Google’s interactive segmentation process, in which professional human annotators iteratively correct the output of a segmentation neural network. Google says the method produces masks with an average accuracy of 84 percent while being considerably more efficient than manual drawing alone.

Example masks on the validation and test sets of Open Images V5, drawn completely manually.

In addition to the masks, Google added 6.4 million manually verified image-level labels, bringing the total to 36.5 million labels covering nearly 20,000 categories. Google researchers also improved the annotation density for the 600 object categories in the validation and test sets, adding more than 400,000 bounding boxes to match the annotation density of the training set and enable a more accurate evaluation of object detection models.

The ICCV 2019 Open Images Challenge will introduce a new instance segmentation track based on the Open Images V5 dataset. Also featured this year are a large-scale object detection track covering 500 categories with 12.2 million training bounding boxes, and a visual relationship detection track for detecting pairs of objects in particular relationships.

The training set with all annotations is available for download.