Magnus Westerlund at Work


Introduction

I am currently working at Ericsson Research in Kista, Stockholm, Sweden. I mostly do standardization work in the area of multimedia. Here I will try to clarify a bit more what I do, both the results and the way of working. Or: how to learn a thousand three- and four-letter acronyms.

Let's start with what I am currently involved in. I am working on a few different activities that are all related.
  • I am currently a Transport Area Director in the IETF.
  • I work on a few IETF specifications.
What I have worked with:
  • The Packet-switched Streaming Service (PSS), a Third Generation Partnership Project (3GPP) standard for bringing streaming multimedia to third-generation and 2.5-generation mobile phones. I have contributed quite a lot to the technical specification TS 26.234, but also to TS 26.244.
  • The Multimedia Broadcast/Multicast Service (MBMS), where I was involved in writing the service layer specification TS 26.346.
  • Co-chair of the Audio/Visual Transport (AVT) working group (WG) in the Internet Engineering Task Force (IETF) from June 2003 until May 2006.
My work can be summarized as learning things, connecting them together, and evolving them further.

For Dummies

If you don't know what streaming multimedia is, this section provides a three-minute crash course:

Streaming multimedia is the combination of multimedia content and a delivery method defined as streaming. Multimedia should be rather familiar to most people. I see it as one or more media, like audio, video, and text, combined into something that a user can consume to gain some information. The information can be purely informative or for pleasure. It normally implies that the media combination has moving or timed properties, as TV or a movie has.

The streaming part is how this media is delivered. The delivery is normally considered streamed when the receiver starts presenting the media to the user before all the data has been received. That way the delivery continues during the playback or presentation of the media. This means the receiver does not need to store all the media, which is practical when delivering large amounts of data to devices with limited storage. It also has the advantage of allowing playback to start earlier than if a complete download were needed, and it allows the user to halt delivery before everything is downloaded.
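The earlier-start property can be illustrated with a small back-of-the-envelope calculation. This is a sketch of my own with made-up example numbers, not figures from any specification:

```python
# Illustrative comparison: time until playback can start when the whole
# file must be downloaded first, versus streaming with a short pre-buffer.
# All numbers are invented example values.

def download_start_delay(media_size_bits, link_rate_bps):
    """Playback start delay if the whole file must arrive first."""
    return media_size_bits / link_rate_bps

def streaming_start_delay(prebuffer_seconds, media_rate_bps, link_rate_bps):
    """Playback start delay if we only pre-buffer a few seconds of media."""
    return prebuffer_seconds * media_rate_bps / link_rate_bps

# A 3-minute clip encoded at 64 kbit/s, delivered over a 128 kbit/s link:
media_rate = 64_000           # bits per second of encoded media
link_rate = 128_000           # bits per second of available bandwidth
clip_size = 180 * media_rate  # total clip size in bits

print(download_start_delay(clip_size, link_rate))        # 90.0 seconds
print(streaming_start_delay(5, media_rate, link_rate))   # 2.5 seconds
```

With these example numbers the streaming receiver can start playback after buffering a few seconds of media, while a full download would keep the user waiting a minute and a half.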

I have basically only worked with how to provide streaming multimedia over IP. IP is the fundamental part of the Internet that allows computers to address and send data to other computers all over the world. To enable streaming multimedia over IP you need a couple of components.
  • A server which has multimedia presentations: news clips, movie trailers, whatever you want to watch. The server sends out the data upon request from the receiver.
  • A receiver, which is controlled by a user, that is capable of communicating with the server, and to receive and present the media sent by the server. 
This requires a few fundamental components:
  • A signalling protocol. The signalling protocol allows the controlling party, normally the receiver, to request what media shall be delivered and how. This allows the user to start, stop, and seek in the media. It may also provide services such as synchronization between different media so they can be presented in sync. For streaming over IP one such protocol is the Real-Time Streaming Protocol (RTSP).
  • A media transport protocol. The media transport protocol provides the necessary functionality for the receiver to determine when and where each piece of media belongs. That requires functionality to provide timing information; identifying the source and how the media is encoded is also necessary. On IP the most common protocol is the Real-time Transport Protocol (RTP).
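To make the transport-protocol part concrete, here is a minimal sketch of parsing the fixed 12-byte RTP header defined in RFC 3550. The field layout follows the RFC; the example packet bytes are invented for illustration:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550).

    A minimal illustrative sketch: CSRC lists and extension headers
    are counted/flagged but not extracted.
    """
    if len(packet) < 12:
        raise ValueError("too short to be an RTP packet")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,           # always 2 for current RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,    # identifies the encoding in use
        "sequence_number": seq,       # for loss detection and reordering
        "timestamp": timestamp,       # media timing, in the codec's clock rate
        "ssrc": ssrc,                 # identifies the media source
    }

# Example packet: version 2, payload type 96, seq 1, timestamp 160
pkt = bytes([0x80, 0x60, 0x00, 0x01]) \
    + (160).to_bytes(4, "big") + (0x12345678).to_bytes(4, "big")
hdr = parse_rtp_header(pkt)
print(hdr["payload_type"], hdr["sequence_number"], hdr["timestamp"])  # 96 1 160
```

The sequence number, timestamp, and source identifier are exactly the "when and where this media belongs" information described above.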

Work Projects

IETF specifications

At the time of writing this (July 2003) I am involved in a number of specifications within the IETF. To get a current view of what I am involved in, you can go to the IETF Internet-Drafts search page, type "westerlund" in the search window, and select "internet drafts" as the type. The specifications I am working on are:
  • An update to the Real-Time Streaming Protocol (RTSP). This is a control protocol that was defined back in 1997 and has now started to see a lot of use. However, RTSP has been found to have a number of flaws in its specification, so we have been working for one and a half years so far on correcting them. We have made steady, but somewhat slow, progress.
  • A specification on how RTSP shall traverse Network Address Translators. NATs are evil boxes that are fairly common nowadays and that break the end-to-end properties of the Internet. NATs are usually part of the boxes that allow you to connect multiple computers to a single Internet connection, like DSL. They may be kind of useful to the usual home user; however, they wreak havoc for us protocol developers.
  • An extension to the Session Description Protocol (SDP) to provide bandwidth information that is not dependent on the underlying transport protocol. SDP is used by RTSP and SIP to describe the media streams that are going to be sent. However, with today's schizophrenic network using two IP versions that have different header sizes, the necessary bit-rate differs depending on the version. To counter this problem I have written this extension, which is now finished and only awaiting publication as an RFC.
  • An RTP payload format for the video codec H.264. This is a specification describing how to encapsulate units from the video encoder for transport over the transport protocol RTP. It is quite an advanced payload format, capable of handling interleaving and some other special tricks.
  • An RTP payload format for the speech and audio codec AMR-WB+. Like the payload format for H.264, this specification describes how to encapsulate encoded data frames into an RTP payload. The AMR-WB+ codec is an extended version of the AMR-WB codec and encodes mono or stereo audio at low bit-rates, up to 24 kbit/s. The AMR-WB speech codec encodes speech while maintaining the voice spectrum up to 7 kHz, allowing for much more natural speech than normal telephony.
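The transport-independent bandwidth idea mentioned above can be sketched numerically. This follows the general approach of the SDP "b=TIAS" extension (RFC 3890), where the session announces an application-level bit-rate plus a maximum packet rate and the receiver adds its own per-packet IP/UDP overhead; the concrete numbers below are my own invented example values:

```python
# Hedged sketch of transport-independent bandwidth signalling: the sender
# announces the application bit-rate (TIAS) and a maximum packet rate, and
# each receiver adds the lower-layer overhead for its own IP version.

IPV4_UDP_OVERHEAD = 28   # 20-byte IPv4 header + 8-byte UDP header
IPV6_UDP_OVERHEAD = 48   # 40-byte IPv6 header + 8-byte UDP header

def transport_bitrate(tias_bps: int, maxprate: float, overhead_bytes: int) -> float:
    """Total bit-rate on the wire: application bit-rate plus header overhead."""
    return tias_bps + maxprate * overhead_bytes * 8

# Example (made-up): a 64 kbit/s stream sent as 50 packets per second.
tias, maxprate = 64_000, 50
print(transport_bitrate(tias, maxprate, IPV4_UDP_OVERHEAD))  # 75200
print(transport_bitrate(tias, maxprate, IPV6_UDP_OVERHEAD))  # 83200
```

The same stream needs 8 kbit/s more over IPv6 than over IPv4 in this example, which is exactly the version-dependent difference that a single transport-dependent bandwidth value cannot express.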
The work with a specification is to write up the specification itself after one has tried to determine what the best design is. Then a draft of the specification is published, allowing other people to read and comment on it. Hopefully one receives comments that allow one to improve the specification. In some cases there are lengthy discussions over email, phone, or in person about the best course of action. After a while the specification reaches a maturity that allows it to be published.

IETF AVT Working Group Chair

This is a quite new responsibility; I became AVT WG chair in mid-June 2003. The work as WG chair is to ensure that the group produces the specifications it has decided on. This should be done in a timely manner and with sufficiently high quality. The work is also about looking at proposed work and discussing its suitability for further development. This requires me to keep up to date with all the development in the WG, normally by reading the mailing list and proposed drafts, and to provide feedback and help the progress along. It does require me to write a certain amount of email and to participate in some phone conferences. It is also a question of handling people, which I think will further develop my skills.

The IETF has three meetings every year, normally two in the US and the third somewhere else in the world. These meetings last for six long days, with WG sessions scheduled from 9 in the morning until 10 in the evening. A particular WG normally only meets for a couple of hours during the whole week, but if one is involved or interested in several WGs the schedule can become rather full. Add to that the corridor discussions and the breakfast, lunch, dinner, and bar discussions, and it becomes a very busy week. There is usually not much time to look around at the location, so I try to get some extra days when at interesting places. So far I have been to San Diego (California, USA), Minneapolis (Minnesota, USA), San Francisco (California, USA), Atlanta (Georgia, USA), London (UK), Yokohama (Japan), and Vienna/Wien (Austria).

3GPP PSS

The 3GPP Packet-switched Streaming Service (PSS) is what I would call an umbrella standard. It takes a number of protocols and other specifications and defines an integrated service; in some cases it defines its own extensions where publicly available specifications are lacking. PSS specifies what mobile phones, and possibly other low bit-rate devices, shall, should, and may implement to be able to communicate interoperably with a server providing PSS content. The standard contains a number of parts to get the necessary functionality.
  • Set-up and control signalling through RTSP.
  • Capability exchange through use of UAProf
  • Media transport through RTP
  • Layout and time composition through W3C's SMIL
  • Speech encoding through 3GPP AMR and AMR-WB speech codecs.
  • Audio encoding through AAC codec
  • Video encoding through ITU H.263, and MPEG-4 Visual codecs
  • Storage and server file format for the continuous media, based on the ISO file format.
If you want to know more about PSS, you can read the specifications 3GPP TS 26.233 and TS 26.234. The links point to the Release 5 versions. We are currently working on Release 6 of the specifications, which will contain a number of extended functionalities, like bit-rate adaptation, new video and audio codecs, and a server file format.

My part in this work is to help determine what we (Ericsson) see as necessary in the updated standard. For parts no one else has proposed, I write input papers describing the extension. I have also written some of the specification text. However, my main task is to evaluate others' proposals and determine our standpoint and how technically feasible they are.

How I work

A normal day at the office starts with reading up on mail. In today's society, with technically active people in many countries, there is constant work ongoing somewhere in the world. As a result, when I arrive at the office I normally have 100-200 emails to look through. A lot may not be that interesting, but some will require input from me. Depending on what it is, this takes from an hour up to the whole day.

Then I try to perform the tasks that people expect of me: evaluate someone's proposal by reading and commenting on it; write specifications, proposals, or reports as necessary; go to meetings to discuss or keep up to date with work by my colleagues. Sometimes there is a need to carry out simulations or emulations of proposals to show that they work, which means that sometimes I may actually write some lines of code.

Work always gets more hectic in the week(s) before a standardization meeting, when proposals need to be written, evaluated, or argued over. I go to a few meetings every year: basically all the IETF meetings, which are three, and maybe some 3GPP meeting or ad hoc meeting. I find the rather few trips I take to be a very appropriate amount. I am glad I don't participate in more forums, as that would mean more travel, which would use up too much of my private time.




Responsible for this page: Magnus Westerlund (mwesterlund(at)bredband.net), Last edited 2006-04-06
