In this paper, we exploit cross-lingual models to enable dialogue act (DA) recognition for specific tasks with only a small number of annotations. We design a transfer learning approach for DA recognition and validate it on two different target languages and domains. We compute dialogue turn embeddings with both a CNN and a multi-head self-attention model, and we show that the best results are obtained by combining all sources of transferred information. We further demonstrate that the proposed methods significantly outperform related cross-lingual DA recognition approaches.
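As a rough illustration of the self-attention turn encoder mentioned in the abstract, the PyTorch sketch below pools a sequence of word embeddings into a fixed-size turn embedding. It is not the paper's actual implementation: the module name, embedding dimension, head count, residual layer norm, and mean pooling are all illustrative assumptions.

```python
# Hypothetical sketch only; architecture details are assumptions, not the paper's.
import torch
import torch.nn as nn

class TurnEncoder(nn.Module):
    """Encodes one dialogue turn (a sequence of word embeddings) into a
    fixed-size turn embedding using multi-head self-attention."""

    def __init__(self, embed_dim: int = 300, num_heads: int = 6):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, words: torch.Tensor) -> torch.Tensor:
        # words: (batch, seq_len, embed_dim), e.g. cross-lingual word embeddings
        attended, _ = self.attn(words, words, words)  # self-attention over tokens
        attended = self.norm(words + attended)        # residual + layer norm
        return attended.mean(dim=1)                   # mean-pool to one turn vector

# Usage: embed a batch of 2 turns, 10 tokens each, into 300-d turn vectors.
turns = torch.randn(2, 10, 300)
print(TurnEncoder()(turns).shape)  # torch.Size([2, 300])
```

A CNN variant, as the abstract also mentions, would replace the attention layer with 1-D convolutions over the token dimension followed by max pooling.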