Data analytics on physiological signals from wearable devices

Andrea Bizzego

wearable devices, physiological signal processing, affective computing, real-life sensing

Wearable devices represent an opportunity to enable the acquisition and quantification of physiological signals in real-world indoor and outdoor contexts. However, their use in research should be based on a reproducible analytics process, ensuring that all the critical steps in data collection and processing are managed in a reliable experimental setup. This research investigates the actual value and the technical limitations of wearable devices in a research context. In particular, the study aims to define an approach, and concrete solutions, that compensate for the effects of such technical limits by leveraging three key aspects: signal processing algorithms, the sensing architecture, and the validation of reproducibility. Signal processing algorithms are defined to deal with the lower signal-to-noise ratio of signals collected through wearable devices and to increase the reliability of the extracted physiological indicators. A real-life sensing architecture is developed to enable synchronized acquisition from multiple subjects and multiple sensors, including cardiac signals, electrodermal activity and inertial data streams. The signal processing pipeline and the real-life sensing architecture are merged into a single data analytics framework (Physiolitix). The framework is validated on a wide range of sensors, including medical-quality multi-sensor smartwatches and smart textile garments, applied in diverse research contexts. In particular, a calibration dataset is developed to compare wearable and clinical devices on an affective computing task. Results show that, with adequate signal processing and machine learning solutions, wearable devices can be employed as a valid substitute for medical-quality devices.
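To illustrate the kind of noise-robust processing the abstract alludes to, the following is a minimal sketch (not the Physiolitix pipeline itself; the function name and parameters are illustrative) of extracting a reliable heart-rate indicator from a low-SNR wearable cardiac signal: a moving-average detrend to remove baseline wander, then peak counting with an amplitude threshold and a refractory period to reject noise spikes.

```python
import numpy as np

def estimate_heart_rate(sig, fs):
    """Estimate heart rate (bpm) from a noisy cardiac pulse signal.

    Hypothetical sketch: detrend with a 0.75 s moving average, then
    detect beats as local maxima above an amplitude threshold, with a
    0.4 s refractory period so one beat is never counted twice.
    """
    # Moving-average detrend removes slow baseline wander.
    w = int(0.75 * fs)
    baseline = np.convolve(sig, np.ones(w) / w, mode='same')
    d = sig - baseline

    thr = 0.5 * d.max()              # simple adaptive amplitude threshold
    refractory = int(0.4 * fs)       # minimum plausible inter-beat gap

    peaks = []
    i = 1
    while i < len(d) - 1:
        if d[i] > thr and d[i] >= d[i - 1] and d[i] > d[i + 1]:
            peaks.append(i)
            i += refractory          # skip past the refractory window
        else:
            i += 1

    if len(peaks) < 2:
        return 0.0
    ibi = np.diff(peaks) / fs        # inter-beat intervals in seconds
    return 60.0 / ibi.mean()
```

On a synthetic 60 bpm pulse train with additive noise, the estimator recovers the rate to within a few bpm; a production pipeline would of course add artifact rejection and signal-quality indexing on top of this.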




Emotion recognition from wearable physiological signals in a non-lab environment

Stefan Dragicevic

emotion recognition, signal processing, wearable sensors, multimodal

In recent years, wearable sensors have attracted increasing attention and expectations, with significant growth in production, adoption and quality. They allow for non-invasive, continuous collection of physiological data from users, and given their rapid development they will soon be commonly used in everyday life. One of the main problems when switching from precise, medical-quality research sensors to commercial sensors used in real-life scenarios is the drop in the quality of the acquired data. A robust multimodal approach needs to be developed to overcome this problem and achieve higher performance. The aim of this research is to develop a signal fusion method, focused on human emotion recognition in non-laboratory environments, that can be used to assess multimedia content (e.g., affective video analysis and video summarization) and to understand the influence of human emotions on behavior during social interaction.
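A common starting point for the multimodal fusion the abstract calls for is feature-level fusion: each modality's feature block is standardized independently before concatenation, so that signals on very different scales (e.g., heart rate in bpm versus electrodermal activity in microsiemens) contribute comparably. The sketch below is a hypothetical illustration of that idea, paired with a nearest-centroid classifier; neither function is from the thesis.

```python
import numpy as np

def fuse_features(modalities):
    """Feature-level fusion sketch: z-score each modality's feature
    block independently, then concatenate, so no single modality
    dominates because of its physical units or scale."""
    blocks = []
    for X in modalities:
        mu, sd = X.mean(axis=0), X.std(axis=0)
        sd = np.where(sd == 0, 1.0, sd)   # guard against constant features
        blocks.append((X - mu) / sd)
    return np.hstack(blocks)

def nearest_centroid(train_X, train_y, test_X):
    """Toy classifier on the fused features: assign each sample to the
    class with the nearest per-class mean vector."""
    classes = np.unique(train_y)
    cents = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((test_X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

Even this simple scheme separates two synthetic "emotional states" whose cardiac and electrodermal features live on scales that differ by two orders of magnitude, which is precisely the situation unnormalized concatenation handles poorly.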





Aravind Harikumar

LiDAR, Forestry, Tree Segmentation

Airborne Light Detection and Ranging (LiDAR) remote sensing based forest inventory at the individual tree level is a valuable and effective alternative to manual inventory, due to factors such as higher accuracy, easy repeatability of sampling, and economic benefits. However, individual tree detection in multi-storied forests is challenging due to high tree proximity and the structural complexity of the forest. In this work, we aim to detect subdominant trees in a multi-storied forest from high-density, small-footprint, multi-return airborne LiDAR data. Segmentation is performed on the Canopy Height Model (CHM), and the three-dimensional (3D) data associated with each segment are extracted. The data associated with every segment are separately projected onto a novel 3D space, where crown surface information is effectively represented and subdominant trees are highlighted. A set of ten carefully engineered features is then employed to separate subdominant from dominant trees. Preliminary results demonstrate the effectiveness of the proposed method.
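The CHM segmentation step typically begins by locating dominant tree tops as local maxima of the height raster. The sketch below shows that standard first stage on a gridded CHM (a generic illustration, not the thesis's segmentation algorithm; the function name, window size and height threshold are illustrative): a cell is a candidate tree top if it strictly exceeds every other cell in a square neighborhood and is above a minimum vegetation height.

```python
import numpy as np

def detect_tree_tops(chm, min_height=2.0, win=3):
    """Local-maximum tree-top detection on a Canopy Height Model grid.

    A cell (r, c) is reported as a tree top when its height is at least
    `min_height` and strictly greater than every other cell in the
    (2*win+1) x (2*win+1) window centred on it.
    """
    rows, cols = chm.shape
    tops = []
    for r in range(win, rows - win):
        for c in range(win, cols - win):
            h = chm[r, c]
            if h < min_height:
                continue  # ignore ground and low vegetation
            patch = chm[r - win:r + win + 1, c - win:c + win + 1]
            # strict maximum: all other cells in the window are lower
            if h >= patch.max() and (patch < h).sum() == patch.size - 1:
                tops.append((r, c))
    return tops
```

Note that this stage, by construction, finds only crowns visible from above; the subdominant trees targeted in the abstract are exactly those missed here, which is why the 3D point data inside each segment must be analyzed further.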




Time-frequency reassignment for acoustic signal processing

From speech to singing voice applications

Georgia Tryfou

The various time-frequency representations of acoustic signals share the common objective of describing how the energy or intensity of the signal changes in time, i.e. the temporal evolution of the spectral content of the signal. Among the most commonly used time-frequency representations, the spectrogram, as obtained from the short-time Fourier transform, suffers from certain limitations. In this work, we study the use of a sharpened version of the spectrogram, obtained with the method of time-frequency reassignment, as a means of improving the results of speech and music signal analysis tools such as speech recognition and singing voice melody extraction.
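The frequency dimension of reassignment can be sketched with the classical derivative-window method of Auger and Flandrin: alongside the ordinary STFT with window h, a second STFT is computed with the window's time derivative, and the imaginary part of their ratio gives, for each bin, the offset from the bin centre to the signal's instantaneous frequency. The code below is a minimal numpy illustration of this principle (function name and parameters are illustrative, and full reassignment would also relocate energy along the time axis using a time-weighted window).

```python
import numpy as np

def reassigned_frequencies(x, fs, n_fft=1024, hop=256):
    """Frequency-reassignment sketch (derivative-window method).

    Returns the complex STFT S and, per bin and frame, the reassigned
    frequency F in Hz: for a pure tone, the correction
    Im(X_dh / X_h) equals (bin frequency - tone frequency) in
    radians per sample, so subtracting it recovers the tone.
    """
    win = np.hanning(n_fft)
    dwin = np.gradient(win)                       # discrete approximation of dh/dt
    n_frames = 1 + (len(x) - n_fft) // hop
    bin_freqs = np.arange(n_fft // 2 + 1) * fs / n_fft
    S = np.empty((n_fft // 2 + 1, n_frames), dtype=complex)
    F = np.empty(S.shape)
    for j in range(n_frames):
        frame = x[j * hop : j * hop + n_fft]
        Xh = np.fft.rfft(frame * win)
        Xdh = np.fft.rfft(frame * dwin)
        S[:, j] = Xh
        with np.errstate(divide='ignore', invalid='ignore'):
            corr = np.imag(Xdh / Xh)              # radians/sample offset from bin centre
        F[:, j] = bin_freqs - corr * fs / (2.0 * np.pi)
    return S, F
```

For a 440 Hz sine sampled at 8 kHz with a 1024-point FFT, the bin spacing is about 7.8 Hz, yet the reassigned frequency at the spectral peak lands on 440 Hz to well under a hertz; this sub-bin sharpening is what makes reassignment attractive for melody extraction.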