Radar sensors, unlike lidars or cameras, can measure objects' instantaneous velocity and composition. However, the ability to process and retrieve spatial information from radar point clouds has not yet been fully established. In this work, we propose a key technique for improving the performance of standard point-wise machine learning methods on radar point clouds. We show that a network can learn to extract object-related signatures for every point from automotive radar measurements such as Doppler and RCS. In addition, we propose RadarPCNN, a novel architecture for semantic segmentation, specifically designed for radar point clouds. RadarPCNN uses PointNet++, aided by mean shift, as its feature extractor module, and an attention mechanism to fuse information from different neighborhood levels. We show that our model outperforms state-of-the-art solutions on our dataset.