Cross-Subject Cross-Modal Transfer for Generalized Abnormal Gait Pattern Recognition

Introduction

For abnormal gait recognition, pattern-specific features indicating abnormalities are interleaved with subject-specific differences representing biometric traits. Deep representations are therefore prone to overfitting, and the resulting models generalize poorly to new subjects. Furthermore, abnormal gait data from precise Motion Capture (Mocap) systems are scarce because of regulatory constraints and the slow adoption of new technologies in health care. Data captured by markerless vision sensors or wearable sensors, on the other hand, can be collected in home environments, but noise from such devices may prevent effective extraction of relevant features. To address these challenges, we propose a cascade of deep architectures that encodes both cross-modal and cross-subject transfer for abnormal gait recognition. Cross-modal transfer maps noisy data obtained from RGBD and wearable sensors to accurate four-dimensional (4D) representations of the lower limbs and joints obtained from the Mocap system. Cross-subject transfer then disentangles subject-specific from pattern-specific gait features using a multi-encoder autoencoder architecture. To validate the proposed methodology, we collected multi-modal gait data with a multi-camera motion capture system, along with synchronized recordings of electromyography (EMG) data and 4D skeleton data extracted from a single RGBD camera. Classification accuracy improved significantly for both the Mocap and the noisy modalities.
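The following is a minimal PyTorch sketch of the two-stage cascade described above, not the paper's exact architecture: all dimensions, layer choices, and names (NOISY_DIM, MultiEncoderAE, etc.) are illustrative assumptions. Stage 1 regresses Mocap-quality features from the noisy modality; stage 2 splits each sample into a subject code and a pattern code through two encoders and one shared decoder.

import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not from the paper):
NOISY_DIM = 96   # flattened noisy skeleton / wearable features per gait cycle
MOCAP_DIM = 96   # flattened 4D Mocap lower-limb representation
SUBJ_DIM = 16    # subject-specific (biometric) latent size
PATT_DIM = 16    # pattern-specific (abnormality) latent size

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

# Stage 1 (cross-modal transfer): map the noisy modality to
# Mocap-quality features.
cross_modal = mlp(NOISY_DIM, MOCAP_DIM)

# Stage 2 (cross-subject transfer): two encoders produce a subject code
# and a pattern code; one decoder reconstructs the input from both.
class MultiEncoderAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_subject = mlp(MOCAP_DIM, SUBJ_DIM)  # biometric traits
        self.enc_pattern = mlp(MOCAP_DIM, PATT_DIM)  # abnormality cues
        self.decoder = mlp(SUBJ_DIM + PATT_DIM, MOCAP_DIM)

    def forward(self, x):
        z_s = self.enc_subject(x)
        z_p = self.enc_pattern(x)
        return self.decoder(torch.cat([z_s, z_p], dim=-1)), z_s, z_p

# Cascade: noisy input -> Mocap-like features -> disentangled codes.
model = MultiEncoderAE()
x_noisy = torch.randn(8, NOISY_DIM)          # a batch of noisy gait cycles
x_hat = cross_modal(x_noisy)                 # cross-modal mapping
recon, z_s, z_p = model(x_hat)
loss = nn.functional.mse_loss(recon, x_hat)  # reconstruction term only;
# further objectives would be needed to actually enforce disentanglement.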

Qualitative Results

Visualizations of gait cycles across subjects and patterns

[Image grid: gait-cycle visualizations for subj1, subj2, subj3, and subj4 (rows) under the Normal, Supination, Pronation, Toe-in, and Toe-out patterns (columns).]

Cross reconstruction results

[Images: subjA (s), subjB (p), and the cross reconstruction A-s + B-p.]
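Continuing the sketch above, cross reconstruction pairs one subject's subject code with another subject's pattern code before decoding. Which pattern the figure's (s) and (p) labels denote is not stated here, so the pairing below is purely illustrative.

# Hypothetical cross reconstruction with the sketch's MultiEncoderAE:
x_a = torch.randn(1, MOCAP_DIM)     # a gait cycle from subjA
x_b = torch.randn(1, MOCAP_DIM)     # a gait cycle from subjB
z_s_a = model.enc_subject(x_a)      # subjA's subject (biometric) code
z_p_b = model.enc_pattern(x_b)      # subjB's pattern (abnormality) code
x_cross = model.decoder(torch.cat([z_s_a, z_p_b], dim=-1))
# If the codes are well disentangled, x_cross resembles subjA walking
# with subjB's gait pattern.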

Citation

Xiao Gu, Yao Guo, Fani Deligianni, Benny Lo, Guang-Zhong Yang, "Cross-Subject Cross-Modal Transfer for Generalized Abnormal Gait Pattern Recognition", IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020.