Debasmita Banerjee
Instrumentation
08 Oct 2017

Mouth gestures can now help users interact with VR systems, says a study from Binghamton University

Do you sleep in for long hours on weekends? I do. It’s also the best time for me to ‘daydream’. Given that our future is in the hands of tech giants with sharp minds, our entire reality could someday be an extension of our imagination, dreams, and hallucinations. History shows how, over time, the theater experience was transformed into a field of scientific exploration.

Thus began the journey of immersive technology: it helps you forget your surroundings for a moment and places you in an entirely different 3D world, with sensations you may never have experienced before. Behind this seemingly dreamy technology lies a lot of machinery that must be perfectly calibrated for a consistent experience, and while evolution is inevitable, scientists and engineers are constantly hunting for better replacements for its standard modules. CE’s Sunday Special is all about the latest from Binghamton University, State University of New York, where a team has found mouth gestures to be a more flexible way of interacting with a VR environment.

[Image] Professor Lijun Yin, who led the team behind the new mouth gesture based VR system

According to the published work, the group has developed a brand new way to interact with a VR environment using only the user’s mouth gestures. In virtual reality, all the elements must be in perfect sync, because immersion is the key to keeping the virtual world intact. VR environments come in many variants, each with its own level of immersion. The published work suggests that the current growth of head-mounted displays is proportional to their success in delivering realistic, immersive visual experiences.

However, one practical obstruction is that the head-mounted display covers the upper portion of the user’s face, making full facial expressions unrecognizable and ruling out a significant range of expression-based interactions.
To fix this, Professor of Computer Science Lijun Yin of Binghamton University and his team created a new framework that detects mouth gestures and uses them as a medium for interaction inside a real-time VR environment.

The application was then tested by a group of graduate students. Wearing a head-mounted display (HMD), the students played a simple game in which they had to guide an avatar through a forest and eat as many cakes as possible. The avatar’s direction was controlled by head rotation, while other actions were performed with mouth gestures; in particular, cakes could only be eaten by smiling. The system worked with a low error rate and was later validated through a real-time virtual reality application.
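The game’s control scheme described above can be sketched in a few lines. This is a hypothetical illustration only: the function names, gesture labels, and thresholds are assumptions for the sake of the sketch, not the researchers’ actual implementation.

```python
# Illustrative sketch: map head rotation to steering and a classified
# mouth gesture to an in-game action, as in the cake-eating demo.
# All names and thresholds here are hypothetical.

def steer(yaw_degrees: float) -> str:
    """Map head yaw from the HMD's rotation tracking to a turn command."""
    if yaw_degrees < -10:
        return "turn_left"
    if yaw_degrees > 10:
        return "turn_right"
    return "go_straight"

def act(mouth_gesture: str) -> str:
    """Map a classified mouth gesture to an in-game action."""
    if mouth_gesture == "smile":
        return "eat_cake"
    return "none"

if __name__ == "__main__":
    print(steer(-15))    # avatar turns left
    print(act("smile"))  # avatar eats a cake
```

In a real system the gesture label would come from a classifier running on a camera feed of the lower face, but the mapping from gesture to action would look much like this lookup.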

Yin mentioned that the team plans to bring the system to market with features simultaneously accessible to two users. He envisioned a Skype-like call in which both participants share the same virtual space and, with accurate prediction, each can see the expressions generated by the other. He also underscored the importance of VR in other significant fields such as medicine and the military, where a more immersive experience could help professionals perfect their respective duties.

Graduate students Umur Aybars Ciftci and Xing Zhang assisted with the research. The complete work was published in the proceedings of the 2017 IEEE International Conference on Multimedia and Expo.

Source: IEEE
