A Real-Time Human Action Recognition System Using Depth and Inertial Sensor Fusion

In this paper, a real-time fusion system for human action recognition has been developed that uses data from two sensors of differing modalities: a vision depth sensor and an inertial sensor. The system merges the probability outputs of the features from these two sensors in real time via a decision-based fusion method involving collaborative representation classifiers. The extensive experimental results reported indicate the effectiveness of the system in recognizing human actions in real time compared to using each sensor individually. In future work, we plan to examine specific applications of the fusion framework presented in this paper using depth cameras and wearable inertial sensors that have recently become commercially available, including the second-generation Kinect depth camera, the Texas Instruments time-of-flight depth camera, the Google Tango miniaturized depth camera, Samsung Gear, and the Apple Watch.
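To make the decision-level fusion concrete, the following is a minimal sketch of how the probability outputs of two single-modality classifiers can be merged into one decision. The weighted-sum combination rule, the weights, and the function name `fuse_decisions` are illustrative assumptions; this excerpt does not specify the exact combination rule used with the collaborative representation classifiers.

```python
import numpy as np

def fuse_decisions(p_depth, p_inertial, w_depth=0.5, w_inertial=0.5):
    """Decision-level fusion of per-class probability vectors.

    p_depth, p_inertial : 1-D arrays of class probabilities produced by
    the depth-based and inertial-based classifiers, respectively.
    The weighted-sum rule below is an assumption for illustration; the
    paper's collaborative-representation fusion may differ in detail.
    """
    p_depth = np.asarray(p_depth, dtype=float)
    p_inertial = np.asarray(p_inertial, dtype=float)
    fused = w_depth * p_depth + w_inertial * p_inertial
    fused /= fused.sum()  # renormalize to a valid probability vector
    return int(np.argmax(fused)), fused

# Example: three action classes, with each sensor favoring a different one.
label, probs = fuse_decisions([0.6, 0.3, 0.1], [0.2, 0.5, 0.3])
print(label, probs)  # the fused decision reflects both modalities
```

Because each per-frame fusion step reduces to a small weighted sum and an argmax, this style of decision-level combination adds negligible latency, which is consistent with the real-time operation described above.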
