New Technique To Cut Bandwidth Usage By 98% During Mobile Image Processing

Question asked by Ankita Katdare in #Coffee Room on Nov 14, 2015
A team of researchers from MIT, Adobe Systems and Stanford University has developed a new system for server-based image processing that cuts bandwidth consumption by 98% and power consumption by as much as 85%. With smartphone use growing by leaps and bounds year after year, users expect to do more and more on their mobile devices. Photography is an obvious example: while there has been plenty of innovation in how images are captured, the image-processing side still needs work. Most mobile image-processing applications are computationally intensive and drain the battery quickly. The solutions proposed so far involve sending image files to a central server, but with larger files the cost of data usage is high and the delays incurred are significant.

The new system proposed by the MIT and Stanford researchers and engineers from Adobe involves sending a highly compressed image file to the server; the server then sends back an even smaller file containing simple instructions for modifying the original image.
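The round trip described above can be sketched as three toy functions. Everything here is an illustrative assumption, not the researchers' actual interfaces: the "compression" is crude subsampling, the server-side "filter" is a simple doubling of brightness, and the recipe is a single gain value.

```python
def client_upload(full_res):
    """Phone side: heavily downsample before upload (a crude stand-in
    for aggressive JPEG compression)."""
    return full_res[::8]

def server_process(low_res):
    """Server side: run the expensive edit on the small image, then
    return only a compact instruction (here: one gain value) instead
    of sending pixels back."""
    edited = [min(255, p * 2) for p in low_res]   # placeholder "filter"
    return sum(edited) / max(1, sum(low_res))     # tiny recipe describing the edit

def client_apply(full_res, gain):
    """Phone side: replay the edit locally on the original
    high-resolution pixels using the tiny recipe."""
    return [min(255, round(p * gain)) for p in full_res]
```

The point of the sketch is the asymmetry: the upload is a fraction of the original pixels, and the download is a handful of numbers rather than a full processed image.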

Their method works by altering the image's style, a technique similar to adding 'filters', a popular feature on image-editing apps such as Instagram. It may not be effective for large content-based changes such as deleting a figure or replacing the background with a new fill colour.

To reduce bandwidth usage, the system sends a low-quality JPEG file to the server, and the real magic happens on the server side, where the image is processed. The system introduces high-frequency noise into the image, effectively raising its resolution, an effect that prevents the system from relying too heavily on the consistency of colours within particular sections of the image.
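The noise step can be illustrated with ordinary dithering: quantising a flat grey region without noise collapses it to a single value, while injecting a little high-frequency noise before quantisation preserves variation. This is a minimal sketch of that general idea, not the researchers' actual scheme.

```python
import random

def quantize_with_dither(pixels, levels=8, noise_amp=0.5, seed=0):
    """Quantize grayscale pixels (0-255) down to a few levels, adding
    small high-frequency noise first so flat regions do not collapse
    into one identical value. Classic dithering, used here purely as
    an illustrative stand-in for the noise step described above."""
    rng = random.Random(seed)
    step = 256 / levels
    out = []
    for p in pixels:
        noisy = p + rng.uniform(-noise_amp, noise_amp) * step
        out.append(int(max(0, min(levels - 1, noisy // step))))
    return out
```

With `noise_amp=0.0` a flat row of value 128 maps to one quantised level everywhere; with noise, neighbouring pixels land on either side of the quantisation boundary, retaining texture the later processing can use.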

Next, the system manipulates the image for better contrast, shifts the colour spectrum, sharpens edges and so on. Once that is done, the image is broken down into smaller chunks, and a machine-learning algorithm characterises each chunk using 25 basic parameters.
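The per-chunk fitting can be sketched with a deliberately tiny model: fit a gain and an offset per tile by least squares, mapping the server's input pixels to its processed output. Two parameters stand in for the article's 25; the real model is richer, but the fitting idea is the same.

```python
def fit_tile_recipe(src, dst):
    """Least-squares fit of dst ~ gain * src + offset for one tile.
    A 2-parameter stand-in for the 25 parameters per chunk."""
    n = len(src)
    mean_x = sum(src) / n
    mean_y = sum(dst) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(src, dst))
    var = sum((x - mean_x) ** 2 for x in src)
    gain = cov / var if var else 0.0
    offset = mean_y - gain * mean_x
    return gain, offset

def tiles(pixels, size):
    """Break a flat pixel list into fixed-size chunks."""
    return [pixels[i:i + size] for i in range(0, len(pixels), size)]
```

The full "recipe" is then just the list of fitted parameter tuples, one per tile, which is what keeps the file sent back to the phone so small.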

When this compact description is finally sent back to the mobile device from the server, the system locally performs the modifications on the high-resolution copy of the image.
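The client-side replay might look like the following. Assuming, purely for illustration, that the recipe arrives as a list of per-tile (gain, offset) pairs, the phone never re-downloads pixels; it just applies the recipe tile by tile to its own high-resolution copy.

```python
def apply_recipe(high_res, recipe, tile_size):
    """Phone side: replay the server's per-tile recipe on the original
    high-resolution pixels. The flat 1-D layout and (gain, offset)
    parameterisation are simplifications for illustration."""
    out = []
    for i, p in enumerate(high_res):
        gain, offset = recipe[min(i // tile_size, len(recipe) - 1)]
        out.append(max(0, min(255, round(gain * p + offset))))
    return out
```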

In their experiments, the researchers found time savings of about 50-70 percent and power savings of up to 50-85 percent. They are now working on optimising their technique so that it suits every mobile platform.

What are your thoughts on mobile image processing? Share them with us in the comments below.

Source: MIT
