One-Size-Fits-All Wireless Video

Dina Katabi
Computer Science and Artificial Intelligence Laboratory

Wireless video is increasingly important, driven by user demand for mobile TV, media sharing, and the broadcast of sporting events, lectures, and promotional clips in universities, malls, and hotspots. These applications, however, present a significant challenge to conventional wireless design because they require the source to deliver the video stream to multiple receivers that have different channel qualities. For example, different video receivers may support different data rates (i.e., 802.11 bit rates). Further, the supported rates may change quickly as the receivers roam around. Today, the video source has to choose between tuning its transmissions to reach only nearby receivers and reducing all receivers to the video quality supported by the worst potential receiver.

The main underlying reason for these difficulties is that conventional wireless design implicitly assumes the source knows (or can easily measure) the quality of the channel to its receiver, and hence can select the best data rate and video resolution for that channel. Multicast and mobility challenge the conventional design because they present the source with a channel quality that differs across receivers and varies quickly over time. As a result, the source faces conflicting requirements when it tries to select a data rate and a video resolution for its transmissions. Ideally, one would like a scheme that does not require the source to know the channel quality, yet achieves the best performance for any channel quality.

Figure: frame 66 as received with SoftCast (right clip above, represented by green in the plot above) compared to MPEG-4, the state of the art in video transmission.

Professor Katabi and her students have developed a novel approach to wireless video that addresses these challenges. In their approach, called SoftCast, the source simply broadcasts its packets without specifying a data rate or a video resolution, yet each receiver obtains a video quality commensurate with its channel quality. Receivers with good channels extract a higher information rate from the transmitted signal and hence obtain a better video resolution. Receivers with worse channels extract less information and can watch a lower-resolution version of the transmitted video. This happens naturally despite receiver mobility and interference, and does not require receiver feedback, data rate adaptation, or varying video resolution.

The key idea in SoftCast is that the transmitter encodes video pixels so that the transmitted signal samples are linearly related to differences between the values of video pixels. As a result, a receiver with a good channel receives coded samples that are close to the transmitted samples, and hence decodes pixel values that are close to the original values. It thus recovers an image with high fidelity to the original. A receiver with a bad channel, on the other hand, receives coded samples that are further away from the transmitted ones, decodes them to pixel values that are further away from the original pixels, and hence gets a lower fidelity image.
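The following Python sketch gives a feel for why a linear encoding behaves this way. It is a hypothetical toy model, not the actual SoftCast encoder (which operates on groups of frames and applies further transforms and power allocation); the `encode`, `transmit`, and `decode` functions and the SNR values are illustrative assumptions chosen only to show how channel noise translates into a proportionally small pixel error.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 8x8 "frame" of luminance values (0..255).
frame = rng.integers(0, 256, size=(8, 8)).astype(float)

def encode(frame, scale):
    # Linear encoding: each transmitted sample is a scaled difference
    # between a pixel value and the frame mean.
    mean = frame.mean()
    return (frame - mean) * scale, mean

def transmit(samples, snr_db):
    # Additive white Gaussian noise channel at the given SNR.
    signal_power = np.mean(samples ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), samples.shape)
    return samples + noise

def decode(samples, mean, scale):
    # Linear decoding: invert the scaling and add the mean back.
    return samples / scale + mean

scale = 1.0
tx, mean = encode(frame, scale)
for snr_db in (25, 10, 5):   # good, mediocre, and bad channels
    rx = transmit(tx, snr_db)
    recon = decode(rx, mean, scale)
    rms_error = np.sqrt(np.mean((recon - frame) ** 2))
    print(f"SNR {snr_db:2d} dB -> RMS pixel error {rms_error:5.1f}")
```

In this toy model, the receiver with the 25 dB channel recovers pixels within a few luminance levels of the original, while the 5 dB receiver recovers a noisier but still recognizable image: the error grows smoothly with channel noise rather than falling off a cliff.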

The figure above gives a feel for the user experience by showing frame 66 of the video as received with both MPEG-4 and SoftCast. The graph [which has been cut in two to fit on the page] plots video quality as the receiver moves away from the video source: the green line shows SoftCast's video quality, the orange line shows MPEG-4's video quality, and the blue line shows the channel quality (i.e., its SNR). As the receiver moves away from the sender, MPEG-4's video quality exhibits a cliff effect and drops sharply around frame 66, whereas SoftCast's quality degrades gradually.

Thus, SoftCast provides graceful degradation of the transmitted image for different receivers, depending on the quality of their channel. This is unlike the conventional design, where the transmitter encoding does not preserve the numerical properties of the original video pixels, and hence a small perturbation in the received signal, e.g., a bit flip, can cause an arbitrarily large error in pixel luminance. A demo showing the whole video is available at: http://caterpillar.csail.mit.edu/~szym/flex/ [click on demo]. Read more at: http://dspace.mit.edu/handle/1721.1/44585.
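The contrast can be made concrete with a small, deliberately simplified numerical illustration (in a real codec the flip would occur in a compressed, entropy-coded bitstream, where its effect can be larger still):

```python
pixel = 200                       # original luminance value (8-bit)

# Conventional digital path: a single flipped bit in the binary
# representation can move the value arbitrarily far; flipping the
# most significant bit turns 200 into 72, a luminance error of 128.
bit_flipped = pixel ^ 0b10000000

# Linear (SoftCast-style) path: a small perturbation of the received
# sample produces a comparably small error in the decoded luminance.
small_noise = pixel + 1.7

print(bit_flipped, small_noise)   # 72  201.7
```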
