A Real-Time Human Action Recognition System Using Depth and Inertial Sensor Fusion

In this paper, a real-time fusion system for human action recognition has been developed that uses data from two sensors of differing modalities: a vision depth sensor and an inertial sensor. The system merges the probability outputs of the features from these two sensors in real time via a decision-based fusion method involving collaborative representation classifiers. The extensive experimental results reported indicate the effectiveness of the system in recognizing human actions in real time compared with using each sensor individually. In our future work, we plan to examine specific applications of the fusion framework presented in this paper by using depth cameras and wearable inertial sensors that have recently become commercially available, including the second-generation Kinect depth camera, the Texas Instruments time-of-flight depth camera, the Google Tango miniaturized depth camera, the Samsung Gear, and the Apple Watch.
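
The decision-level fusion step can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the paper's implementation: it assumes each modality's classifier (the collaborative representation classifiers in the paper would play this role) emits a per-class probability vector, and combines the two vectors with a simple weighted-sum rule. The function name `fuse_decisions` and the equal default weights are assumptions introduced here for clarity.

```python
import numpy as np

def fuse_decisions(p_depth, p_inertial, w_depth=0.5, w_inertial=0.5):
    """Decision-level fusion of per-class probability vectors.

    p_depth, p_inertial: 1-D arrays of class probabilities produced by
    the depth-based and inertial-based classifiers, respectively.
    Returns the predicted action class index and the fused probabilities.
    """
    p_depth = np.asarray(p_depth, dtype=float)
    p_inertial = np.asarray(p_inertial, dtype=float)
    # Weighted-sum rule: combine the two modalities' probability outputs.
    fused = w_depth * p_depth + w_inertial * p_inertial
    fused /= fused.sum()  # renormalize to a valid probability vector
    return int(np.argmax(fused)), fused

# Example: three action classes; each sensor contributes a probability vector.
label, probs = fuse_decisions([0.6, 0.3, 0.1], [0.2, 0.7, 0.1])
print(label, probs)  # the class favored after fusing both modalities
```

A product rule (multiplying the per-class probabilities before renormalizing) is a common alternative to the weighted sum shown here; either way, the fusion operates on classifier outputs rather than raw features, which is what makes the scheme decision-based.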
