In this paper, a real-time fusion system for human action recognition has been developed that uses data from two differing modality sensors: a vision depth sensor and an inertial sensor. The system merges the probability outputs of the features from these two sensors in real time via a decision-based fusion method involving collaborative representation classifiers. The extensive experimental results reported indicate the effectiveness of the system for recognizing human actions in real time compared to the situations in which each sensor is used individually. In our future work, we plan to examine specific applications of the fusion framework presented in this paper by using depth cameras and wearable inertial sensors that have recently become commercially available, including the second-generation Kinect depth camera, the Texas Instruments time-of-flight depth camera, the Google Tango miniaturized depth camera, the Samsung Gear, and the Apple Watch.
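As a rough illustration of the decision-level fusion idea described above, the sketch below combines the per-class probability vectors produced by the two single-modality classifiers with a weighted sum and picks the class with the highest fused score. The equal weights and the simple weighted-sum rule are assumptions for illustration only; the paper's actual fusion is performed with collaborative representation classifiers.

```python
import numpy as np

def fuse_decisions(p_depth, p_inertial, w_depth=0.5, w_inertial=0.5):
    """Decision-level fusion sketch: combine per-class probability
    vectors from a depth-based and an inertial-based classifier.

    The weighted-sum rule and equal weights are illustrative
    assumptions, not the exact rule used in the paper."""
    p_depth = np.asarray(p_depth, dtype=float)
    p_inertial = np.asarray(p_inertial, dtype=float)
    # Normalize each modality's scores so they sum to one.
    p_depth = p_depth / p_depth.sum()
    p_inertial = p_inertial / p_inertial.sum()
    # Fuse the two probability vectors and pick the most likely action.
    fused = w_depth * p_depth + w_inertial * p_inertial
    return int(np.argmax(fused)), fused

# Hypothetical example with five candidate actions.
p_depth = [0.10, 0.55, 0.15, 0.10, 0.10]
p_inertial = [0.05, 0.40, 0.35, 0.10, 0.10]
label, fused = fuse_decisions(p_depth, p_inertial)
print(label, fused)  # the action favored by both modalities
```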