Human activity recognition is a challenging field that has attracted considerable research attention over the last decade. Two types of models can be used for such predictions: those that use visual data and those that use data from inertial sensors. To improve classification algorithms in the sensor category, a new dataset has been created targeting more realistic activities, during which the user may be more likely to receive and act upon a recommendation. Contrary to previous similar datasets, which were collected with the device in the user's pocket or strapped to their waist, the introduced dataset presents activities during which the user is looking at the screen, and thus most likely interacting with the device. The dataset was gathered from an initial sample of 31 participants using a mobile application that prompted users to perform 10 different activities following specific guidelines. Finally, to evaluate the resulting data, a brief classification benchmark was performed against two other datasets (i.e., the WISDM and Actitracker datasets) by employing a Convolutional Neural Network model. The results acquired demonstrate a promising performance of the model tested, as well as a high quality of the dataset created, which is available online on Zenodo.