Robotic prostheses represent both the current trend and the future of prosthetic devices. Most existing robotic prosthetic devices use electromyography (EMG) signals as inputs to their controllers. EMG and visual feedback have each been used to predict the grasping intention of the user, but the two sources of information have not previously been fused; moreover, the number of grasping types considered in prediction has been limited, and most investigations have focused on hand gestures. In this paper, we propose a novel fusion method that improves the prediction accuracy of an EMG system by incorporating visual information, together with a weight scheme that assigns higher weights to the grasping patterns most frequently used with each selected object. Our investigation revealed that an artificial neural network (ANN) outperformed KNN, LDA, NB, and DT classifiers, achieving the highest mean true positive rate (73 ± 7.59%). Adding the proposed feature set (age, gender, and handedness) increased the accuracy of the ANN by 16.05 ± 2.70%. The proposed fusion method combines majority-vote results with the weight scheme to improve the prediction accuracy of the most suitable grasping pattern for the target object, by 23.92 ± 8.80% for the highest-usage pattern and by 9.30 ± 0.84% for the second-highest-usage pattern.
Keywords: Grasping Pattern, EMG Classification, Fusion Method, Object Detection, Signal Filtering
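The fusion described in the abstract, combining majority-vote results from an EMG classifier with an object-specific usage-weight scheme, can be sketched as follows. This is a minimal illustration assuming per-window classifier votes and hand-picked weights; the grasp-pattern names and weight values are hypothetical, not the paper's actual data or method details.

```python
# Sketch of a weighted majority-vote fusion (illustrative assumption, not the
# paper's exact algorithm): each grasp pattern is scored by its share of the
# EMG classifier's votes multiplied by its usage weight for the detected object.
from collections import Counter

def fuse(emg_votes, usage_weights):
    """Return the grasp pattern with the highest weighted vote share."""
    counts = Counter(emg_votes)
    total = sum(counts.values())
    scores = {
        pattern: (counts.get(pattern, 0) / total) * weight
        for pattern, weight in usage_weights.items()
    }
    return max(scores, key=scores.get)

# Hypothetical per-window EMG predictions for one grasp attempt, and assumed
# usage weights for the detected object (higher weight = more common grasp).
votes = ["power", "power", "precision", "power", "lateral"]
weights = {"power": 0.6, "precision": 0.3, "lateral": 0.1}
print(fuse(votes, weights))  # "power"
```

Under this scheme a pattern that receives fewer raw votes can still win if the detected object is strongly associated with it, which is how the visual information biases the EMG-only decision.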