21st EANN 2020, 5-7 June 2020, Greece

Detection of Shocking Images as One-Class Classification Using Convolutional and Siamese Neural Networks

Pavel Gulyaev, Andrey Filchenkov


  Automatic detection of not-safe-for-work content is a serious challenge for social media due to the overwhelming growth of uploaded images, GIFs and videos. This paper focuses on automatic detection of shocking images with convolutional neural networks. We consider correct recognition of the shocking class to be more important than that of the non-shocking one. As a baseline, we use binary classification by a convolutional network that is trained during operation. However, this solution has two drawbacks: the network learns incorrect features of non-shocking images (an effectively infinite class) and tends to forget rare subclasses of shocking images, which is unacceptable. To eliminate the first drawback, we approach the problem as one-class classification, keeping in mind that a “non-shocking” image can be defined only by contrast with a shocking one. This method is based on sparse autoencoders built on top of a pretrained convolutional neural network and is not trained during operation. To eliminate the second drawback, we memorize the feature vectors of images that were misclassified during operation. During prediction, a trained siamese network searches this database for similar images. When the combined model makes an incorrect prediction, the image vectors are added to the database and the siamese network is trained on them. This method minimizes the number of errors on rare subclasses that are identified only during the operation phase of the model.
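The one-class idea outlined in the abstract, i.e. scoring an image by how well an autoencoder fitted only on the shocking class can reconstruct its CNN feature vector, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a linear autoencoder obtained via truncated SVD instead of a trained sparse autoencoder, random vectors instead of real pretrained-CNN features, and an illustrative 95th-percentile threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained-CNN feature vectors of "shocking" training
# images (200 samples living in a 32-d subspace of a 128-d space).
X = rng.normal(size=(200, 32)) @ rng.normal(size=(32, 128))

mean = X.mean(axis=0)
# Linear autoencoder via truncated SVD: the top-k right singular
# vectors act as tied encoder/decoder weights.
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:16]  # encoder: 128-d feature vector -> 16-d code

def reconstruction_error(v):
    code = (v - mean) @ W.T   # encode
    recon = code @ W + mean   # decode
    return np.linalg.norm(v - recon)

# Threshold taken from the training error distribution (assumption:
# 95th percentile; the real threshold would be tuned on held-out data).
threshold = np.percentile([reconstruction_error(x) for x in X], 95)

def is_shocking(v):
    # One-class decision: a vector the autoencoder reconstructs well
    # is assigned to the (shocking) class it was fitted on.
    return reconstruction_error(v) <= threshold
```

A vector close to the training manifold is reconstructed accurately and accepted, while a vector far outside it produces a large reconstruction error and is rejected, which is the behaviour a "non-shocking by contradiction" definition requires.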

*** Title, author list and abstract as they appear in the camera-ready version of the paper provided to the Conference Committee. Small changes that may have occurred during processing by Springer may not appear in this window.