Emotion identification via speech recognition algorithms
Researchers have already developed algorithms that identify emotions from a person's voice. These systems can accurately detect 'anger', 'sadness', 'excitement' and similar emotions based on 'how' a person says something (rather than what is said). So you could say "I'm sad" in an excited tone and the algorithm would still detect the excitement.
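For context, the usual approach is to extract acoustic features that capture the 'how' (pitch contour, loudness dynamics, spectral shape) and feed them to an ordinary classifier. Here's a minimal sketch in Python, assuming librosa and scikit-learn are available; the file names and labels in the toy corpus are made up for illustration:

```python
# Minimal sketch of a feature-based speech emotion classifier.
# Assumes librosa and scikit-learn are installed; the corpus below
# is a hypothetical placeholder, not a real dataset.
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(path):
    """Summarise 'how' something is said: pitch, energy, timbre."""
    y, sr = librosa.load(path, sr=None)
    # Fundamental frequency (pitch contour) -- carries intonation.
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                 fmax=librosa.note_to_hz('C7'))
    f0 = f0[np.isfinite(f0)]
    if f0.size == 0:           # fully unvoiced clip: fall back to zeros
        f0 = np.zeros(1)
    # Short-time energy -- loudness dynamics.
    rms = librosa.feature.rms(y=y)[0]
    # MFCCs -- a coarse summary of spectral shape (timbre).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Collapse each contour to per-utterance statistics.
    return np.hstack([f0.mean(), f0.std(), rms.mean(), rms.std(),
                      mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled corpus: (wav path, emotion label) pairs.
corpus = [("angry_01.wav", "anger"), ("sad_01.wav", "sadness")]
X = np.array([extract_features(p) for p, _ in corpus])
labels = [label for _, label in corpus]
clf = SVC().fit(X, labels)    # any standard classifier works here
```

Notice that nothing in this pipeline looks at the words themselves, which is why "I'm sad" said excitedly still reads as excitement to the model.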
I'm curious to know whether the frequencies captured by the algorithm are enough to determine emotions, or whether there are other components to it as well. That is, is it all about 'pattern matching', or is the whole process more complex?
PS: Read our VoiCE coverage: #-Link-Snipped-#