Emotion identification via speech recognition algorithms

Researchers have already developed algorithms that identify emotions from a person's voice. These systems can accurately detect 'anger', 'sadness', 'excitement' and similar emotions based on 'how' a person says something rather than what is said. So you can say "I'm sad" in an excited tone and the algorithm will detect the excitement automatically.

I'm curious to know whether the frequencies captured by the algorithm are enough to determine emotions, or whether there are other components to it as well. That is, is it all 'pattern matching', or is the whole process more complex?
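To make the question concrete: the "frequencies" such a system works with are usually per-frame acoustic features like pitch (F0) and energy, which carry the prosody of 'how' something is said. Below is a minimal, hypothetical sketch of that feature-extraction step using only NumPy and a synthetic tone as a stand-in for speech; real emotion recognisers use far richer features (MFCCs, jitter, shimmer, speaking rate) and trained classifiers.

```python
import numpy as np

def frame_features(signal, sr, frame_len=1024, hop=512):
    """Per-frame prosodic features: RMS energy (loudness cue) and a
    rough autocorrelation-based pitch estimate. Illustrative only."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        energy = np.sqrt(np.mean(frame ** 2))
        # Autocorrelation peaks at the pitch period (in samples).
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        min_lag = sr // 400          # skip the trivial lag-0 peak; cap F0 at 400 Hz
        peak = np.argmax(ac[min_lag:]) + min_lag
        pitch = sr / peak            # rough fundamental-frequency estimate
        feats.append((energy, pitch))
    return feats

# Demo on a synthetic 220 Hz tone (standing in for voiced speech)
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
feats = frame_features(tone, sr)
print(f"first frame: energy={feats[0][0]:.3f}, pitch~{feats[0][1]:.0f} Hz")
```

The point of the sketch: the raw waveform is reduced to a handful of numbers per frame, and it is the *trajectory* of those numbers over an utterance (rising pitch, bursts of energy) that the later pattern-matching stage compares against learned emotional patterns.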

PS: Read our VoiCE coverage: #-Link-Snipped-#

Replies

  • Whats In Name
    I think only frequencies can be fed as input for speech detection, so no other components are involved. That doesn't make it a simple process, though; the complexity lies in the frequencies themselves. The system may detect additional features of speech, but all the calculations and pattern matching must ultimately be done on frequencies.
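For what the "pattern matching" stage might look like, here is a deliberately toy nearest-centroid matcher over frequency-derived features. The feature vectors (mean pitch in Hz, mean energy) and the two emotion classes are hypothetical values chosen for illustration, not taken from any real corpus; a production system would normalise the features and use a trained statistical classifier.

```python
import numpy as np

# Hypothetical training data: (mean pitch Hz, mean RMS energy) per clip.
# Angry speech tends to be higher-pitched and louder than sad speech.
train = {
    "anger":   np.array([[260.0, 0.80], [250.0, 0.75]]),
    "sadness": np.array([[140.0, 0.20], [150.0, 0.25]]),
}

def classify(sample):
    """Nearest-centroid match: pick the emotion whose average training
    vector is closest (Euclidean distance) to the sample."""
    centroids = {emo: vecs.mean(axis=0) for emo, vecs in train.items()}
    return min(centroids, key=lambda e: np.linalg.norm(sample - centroids[e]))

print(classify(np.array([255.0, 0.70])))  # high pitch + loud → anger
```

Note that pitch (hundreds of Hz) dwarfs energy (fractions of 1) in this distance, which is exactly why real systems scale their features first; the toy still works because the classes are far apart on the pitch axis.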
