Autoencoders have become increasingly popular in anomaly detection tasks over the years. Nevertheless, it remains a challenge to train autoencoders for anomaly detection properly. A key contributing factor to this problem in many applications is the absence of a clean dataset from which the normal case can be learned. Instead, autoencoders must be trained on a contaminated dataset containing an unknown number of anomalies that can potentially harm the training process. In this paper, we address this problem by studying the impact of the loss function on the robustness of an autoencoder. It is common practice to train an autoencoder by minimizing a loss function (e.g. squared error loss) under the assumption that all features are equally important to reconstruct well. We relax this assumption and introduce a new loss function that adapts its robustness to anomalies based on the characteristics of the data, on a per-feature basis. Experimental results show that an autoencoder can be trained robustly with this loss function even when the training process is subject to many anomalies.
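The abstract contrasts the standard squared error loss with a loss whose robustness adapts per feature; the paper's specific loss is not given here. As a minimal, generic illustration of the idea (not the authors' actual formulation), the sketch below compares plain squared error with a Huber-style loss whose transition threshold `delta` is set per feature, so large residuals on anomalous samples contribute only linearly rather than quadratically:

```python
import numpy as np

def squared_error_loss(x, x_hat):
    """Standard reconstruction loss: every residual is penalized quadratically,
    so a single anomalous sample can dominate the training signal."""
    return 0.5 * ((x - x_hat) ** 2).mean()

def per_feature_robust_loss(x, x_hat, delta):
    """Huber-style loss with a per-feature threshold `delta` (shape: [n_features]).
    Residuals below delta are penalized quadratically, larger ones only linearly,
    which limits the influence of anomalies on the gradient. This is an
    illustrative stand-in for the adaptive loss proposed in the paper."""
    r = np.abs(x - x_hat)                       # per-sample, per-feature residuals
    quadratic = 0.5 * r ** 2                    # inlier regime
    linear = delta * (r - 0.5 * delta)          # outlier regime, bounded slope
    return np.where(r <= delta, quadratic, linear).mean()

# Hypothetical toy data: 4 samples, 2 features, one grossly anomalous entry.
x = np.zeros((4, 2))
x_hat = np.array([[0.1, 0.2], [0.0, 0.1], [10.0, 0.0], [0.2, 0.1]])
delta = np.array([1.0, 1.0])                    # per-feature robustness threshold

print(squared_error_loss(x, x_hat))             # dominated by the 10.0 outlier
print(per_feature_robust_loss(x, x_hat, delta)) # outlier contributes linearly
```

For the outlier residual of 10.0, the robust term is `1.0 * (10.0 - 0.5) = 9.5` instead of the squared `0.5 * 100 = 50`, so the anomalous sample pulls on the autoencoder's parameters far less.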