
ACADEMIC STREAMING IN EUROPE: REPORT ON TF-NETCAST

Alessandro Falaschi∗, Dan Mønster†, Ivan Doležal‡, Michal Krsek‡

∗Info-Com Dpt., University of Roma La Sapienza, Via Eudossiana 18, 00184 Roma, Italy

†UNI•C, Olof Palmes Allé 38, DK-8200 Århus N, Denmark
‡CESNET, z.s.p.o., Zikova 4, 160 00 Praha 6, The Czech Republic

e-mail: ∗[email protected], †[email protected], ‡[email protected], ‡[email protected]

Abstract

The TF-NETCAST task force was active from March 2003 to March 2004, and during this time the members worked on various aspects of streaming media related to the ultimate goal of setting up common services and infrastructures to enable netcasting of high-quality content to the academic community in Europe. We report on a survey of the use of streaming media in the academic community in Europe, an open source content delivery network, and a portal for announcing live streaming events to the global academic community.

1 Introduction

TF-NETCAST is a TERENA task force focusing on streaming media, with an emphasis on services for the European academic sector, such as a portal for announcing live streaming events, video-on-demand portals, metadata for portals, and content delivery networking infrastructure. TF-NETCAST also made a survey of the use of streaming media in higher education, and collected information about resources for content production. TF-NETCAST has worked on some of the necessary components for setting up a European Academic Netcasting service, such as the live streaming announcement portal and the live streaming infrastructure. These components have not yet been tied together, since they were both under development and testing during the time in which TF-NETCAST was active. All the deliverables and more information about TF-NETCAST are available at the TF-NETCAST web page on TERENA's web site [1].

In this paper we report on the results of the streaming media survey, the live streaming announcement portal, and the live streaming infrastructure, also known as the Open Content Delivery Network, or OpenCDN.

2 Streaming media survey

This section summarises the findings of a survey conducted by TF-NETCAST in the spring of 2003. A more detailed account of the results of this survey is available in TF-NETCAST Deliverable B: Report on Streaming Video Survey [2].

The survey was targeted at people who regularly deal with streaming video: content creators, video producers, ICT staff and project leaders; in short, people who work with one or more aspects of streaming video as part of their job.

2.1 Questionnaire

The questionnaire was implemented as a series of web pages, where each page contained a set of related questions, and the answers were stored in a database for later retrieval and processing.


Figure 1: Distribution of respondents by country. [Bar chart of the number of respondents from Croatia, the Czech Republic, Denmark, Finland, Germany, Greece, Ireland, Italy, Latvia, the Netherlands, Norway, Poland, Portugal, Spain, Switzerland, the United Kingdom and the United States of America.]

The questionnaire contained a total of 112 questions, organised in 49 pages. Each respondent would see a subset of these questions, depending on the answers to certain key questions. For example, respondents would only be asked questions about encoding tools if they answered yes to the question “does your organisation encode material for streaming?”

The questions were organised into the following subjects:

1. Content for streaming

2. Streaming portal

3. Camera and production equipment

4. Streaming servers

5. Media players

6. Network

7. Metadata

8. Future plans

In this paper, we will only present selected results and refer the interested reader to the full report [2] for the details left out here.

The questionnaire was distributed as widely as possible within the constituencies of the NRENs represented in TF-NETCAST, and also on several mailing lists. As a result, the questionnaire was answered by 77 respondents from seventeen different countries (see Fig. 1).


Most of the respondents came from universities (70%), followed by National Research and Education Networks (NRENs) (16%), while the remaining 14% came from other types of institutions.

As many as 88% of the organisations represented in the survey streamed content themselves, while the remaining 12% relied on others to provide streaming servers. Of those who streamed content, 81% said they also streamed live content.

2.2 Streaming portal

Only 40% of the respondents say their organisation has a streaming portal or streaming announcement portal, and 74% of those also have a video archive. Thus, there should be an interest in using the TF-NETCAST announcement portal.

The respondents provided links to their own portals and archives, and the full list of portals and archives can be found in Deliverable B. Looking at the different portals, it is clear that there is large variation in their scope, design and functionality. It also shows that there is a need for a common portal – at least for high-profile events that are also of interest outside a narrow community. In the category of streaming announcement portal, a total of twenty-four links were submitted. It would be much easier to consult just one portal than to cycle through even an abbreviated list of more specific portals.

2.3 Streaming servers

The section about streaming servers in the questionnaire revealed which formats, server software and server platforms are in use and to what extent. 62 respondents (81%) said their organisation operates one or more streaming servers. Windows Media server was used by 58% of those respondents, and one or both of QuickTime Streaming Server and Darwin Streaming Server were used by 56%, while RealServer was used by 52%. Several other types of streaming servers were reported as being used, but of these, only two were used by more than two respondents, viz. Kasenna MediaBase, used by six respondents (10%), and Cisco IP/TV, used by four respondents (6.5%).

Not surprisingly, there is a correlation between the servers used and the file formats in use (see Fig. 2). Windows Media files are used by 71%, RealMedia by 66% and QuickTime by 42%. The standards-based MPEG family of CODECs and file formats is also quite popular, especially MPEG-4, which was used by 55%, versus 34% for MPEG-2 and 32% for MPEG-1. The questions about the formats used were put to all respondents, not only those with streaming servers, so the differences between the numbers for the formats and for the corresponding servers are not inconsistent.

The respondents whose organisations run their own streaming servers were also asked whether they have deployed a content delivery network or replicate content between servers. Eight respondents answered yes to this question, and of those eight, all but one said they were interested in joining a larger content delivery network, but only four were willing to let their own content delivery network be used for distributing events from other organisations. Thus, we may conclude that there is considerable interest in participating in a larger content delivery network. Unfortunately, the questionnaire was designed in such a manner that only those who already have a CDN were asked whether they would be interested in joining a larger CDN. One might expect that precisely those who do not have one, but are planning to deploy one, would be the most interested in participating in an already-existing CDN.

Only seventeen percent of the respondents have ever reached the maximum of their concurrent-stream license, so there should be ample server capacity distributed among the respondents to the questionnaire to be used in a common CDN.

2.4 Media players

The questionnaire contained questions about which players and platforms (operating systems) were used, supported and preferred by the respondents and their organisations. Fifty respondents said their organisation supports and/or recommends specific player software. When asked what media player software was used on computers in their organisation, 39 respondents said they used Windows Media Player, 32 said they used RealPlayer, and 28 said they used QuickTime Player. Other types of player software were also used or recommended, but only by a few of the respondents' organisations.


Figure 2: The types of streaming formats used. [Bar chart of the number of respondents using MPEG-1, MPEG-2, MPEG-4, RealMedia, Windows Media, QuickTime and other formats.]

Windows Media Player: 42%; RealPlayer: 28%; QuickTime Player: 16%; Other: 14%

Figure 3: Answers to the question “What is your preferred/favourite player?”

The respondents were also asked which player they preferred. The distribution of answers to this question is shown in Fig. 3. Windows Media Player is clearly the most popular, preferred by 42% of the respondents. RealPlayer comes second with 28% of the votes, and QuickTime Player is third with 16%. It is somewhat surprising that as many as 14% listed 'other' as their response to this question. This did not indicate that a single other player was popular among the respondents, but rather that a number of other players were preferred by those respondents, including popular players for the Linux platform such as MPlayer, VLC and XMMS.

2.5 Metadata

All respondents were asked whether their video assets and live streams are described by metadata. Thirty percent said their video assets are described by metadata, and 21% said that their live streams are. The respondents who answered yes to at least one of these questions were asked further questions about their use of metadata. Some used non-standard metadata, but many of the models were based on various standards such as Dublin Core, Qualified Dublin Core, IMS Learning Object Metadata and MPEG-7. Dublin Core, Qualified Dublin Core and IMS were the most popular.


The different metadata models and various strategies for the exchange of metadata are discussed in detail in TF-NETCAST Deliverable G: Report on Video-on-Demand Metadata and Portals [3].

3 Live streaming infrastructure

This section illustrates the development of a vendor-agnostic architecture for scalable delivery of live streaming content to a large audience. The resulting overlay network has been named the Open Content Delivery Network, or OpenCDN for short, and it will be referred to as such in the following.

3.1 Background

Distribution of live streaming content to a very large number of clients cannot employ a traditional client-server architecture, as it would quickly run into processing and bandwidth limitations. A classical solution to this problem is multicast distribution of media; unfortunately, multicast routing is not ubiquitously deployed, and although it is well supported in backbone routers, it is not in the periphery. Moreover, major ISPs do not establish multicast peering agreements at their exchange points, preventing their subscribers from receiving multicast content.

A viable solution to scalable live streaming is the deployment of an application-level distribution tree, building an overlay network whose nodes are application-level agents, which act as streaming clients on one side and as streaming servers on the other, and split the incoming stream for each of the connected clients. As this architecture resembles that of a distributed web cache, the term surrogate, or relay, is used instead.

Deploying a set of relays across the network allows implementation of an application-level multicast (ALM) Content Delivery Network (CDN). Many vendors in the streaming market [4, 5, 7, 6, 8] offer CDN architecture solutions, but these generally do not interoperate, and are tied to proprietary protocols and products. By contrast, wide deployment of an Internet-based live streaming infrastructure requires that operators wishing to support the service can place surrogates easily; an open source CDN is thus required.

OpenCDN nodes wrap streaming server devices from different vendors ([7, 8, 9]) in a control layer, allowing remote control of the device independently of its technology. A control entity collects information from the nodes about the set of clients they intend to serve, and when a request for live content arrives, it selects the best node to serve that client. At the same time, the control entity also manages efficient multi-hop distribution of content from the source origin through the nodes.

3.2 Design issues and alternatives

Let us briefly describe the main design concerns, together with the design choices made. According to the definitions found in IETF RFC 3466 and RFC 3570, a CDN implementation is made of the following service components:

• content delivery infrastructure: the set of relay servers which deliver copies of content to sets of users. In OpenCDN, there must exist a set of relays capable of handling the protocol flavours used by the allowed kinds of sources, and of relaying content toward remote clients;

• distribution infrastructure: the mechanism that moves content from the origin server to relays; it needs to be specified in terms of advertising (relay capabilities, load conditions, allowed clients), replication (how relays propagate content) and signalling (how delivery is directed, centralised vs distributed, protocols and metadata definition). In OpenCDN, a centralised control entity (RRDM) receives advertisements from relays, and directs relays to perform replication by means of an ad-hoc protocol based on XML-RPC calls (a minimal sketch of such a call follows this list). Replication is then executed by whatever means the actual kind of servers involved allows;

• request routing infrastructure: the mechanism that moves a client toward a rendezvous with a surrogate. This could be done at network level, if a modified DNS [4] performs the variable mapping from the content URI to the proper surrogate IP. In OpenCDN, request routing is instead performed at application level, by letting the announcement portal ask the control RRDM for CDN service, and by re-writing the contact part of the content metadata with the information returned, indicating the best relay server for a given client;

• accounting infrastructure: the collection and tracking activity about the request routing, distribution and delivery functions actuated within the CDN. At this early stage of development, OpenCDN does not implement accounting, but simply provides a distributed logging function.

Figure 4: OpenCDN operations time diagram (one-level CDN). [Time diagram of HTTP GET/Response between Client and Portal, SetUp and DoRelay requests and responses among Portal, RRDM, Last Hop and Origin, and Play/RTP flows from the Origin through the Last Hop to the Client.]
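To make the signalling concrete, the following is a minimal sketch of a portal-to-RRDM call, written with Python's standard xmlrpc module purely for illustration (OpenCDN itself is written in Perl, see 3.9). The method names SETUP and TEARDOWN appear in section 3.4; the URL, argument structure and return value here are assumptions, not the actual OpenCDN protocol.

    # Hypothetical sketch of portal -> RRDM signalling over XML-RPC.
    # Method names follow section 3.4; arguments and results are assumptions.
    import xmlrpc.client

    rrdm = xmlrpc.client.ServerProxy("http://rrdm.example.org:8000/RPC2")

    def request_cdn_service(origin_url: str, client_ip: str) -> str:
        """Ask the RRDM for the relay mount point best suited to client_ip."""
        result = rrdm.SETUP({"origin": origin_url, "client": client_ip})
        return result["relay_uri"]

    def notify_end_of_transmission(origin_url: str) -> None:
        """Tell the RRDM that the live source has stopped transmitting."""
        rrdm.TEARDOWN({"origin": origin_url})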

3.3 OpenCDN operation

OpenCDN is made of a set of Relay Nodes, which perform content delivery and distribution, and a centralised control entity, named Request Routing and Distribution Management (RRDM), whose tasks are to:

1. record footprint information from nodes;

2. choose which relay(s) are suited to serve the client;

3. direct the relay(s) to pull the stream from the source, if not already done;

4. route the client request to the best/nearest relay;

5. dismantle the distribution tree after a live source stops transmitting.

Step 1 happens when a node boots and registers its capabilities (i.e., the kinds of streaming protocols supported) and footprint¹ information with the RRDM. Steps 2-4 happen each time a client requests live streaming content by visiting the announcement portal, which in turn asks the RRDM for CDN service (see Fig. 4).

In step 2, the RRDM uses the registration information gathered from the nodes to select a set of surrogates which can serve the viewer's address and can deal with the protocol/codec of the origin source. Then, the RRDM sorts the surrogate candidates according to a 'specificness' order (see 3.6). The outcome of step 2 may also depend on the CDN topology already deployed for the same origin, as a surrogate suitable for that client may already be active, or content can be pulled from a nearby relay instead of directly from the source.

Processing performed in step 3 (if present) is driven by the outcome of step 2, and consists in the RRDM directing nodes to pull content as required. Instead of directly controlling all the relays involved, the RRDM only contacts the nodes whose footprints are widest, as explained in sub-section 3.6, selects by UDP probing the one best suited to become a relay (see 3.8), and sends it the list of allowed downstream nodes. This produces a sort of call recursion (see 3.7), the outcome of which (i.e. the set of relays which have been activated) is awaited by the RRDM, and communicated back by the top-level node it has contacted.

¹Footprint is a term borrowed from satellite broadcasting, where it indicates the geographical area that is served. In our context, a footprint is expressed in terms of a network address prefix, or a domain name suffix, and specifies which set of clients a surrogate wishes to serve. As such, two footprints encompassing the same client address will be nested, and the nesting (outer) one will be said to be the widest.
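The footprint helpers assumed above can be realised directly with Python's ipaddress module, under the assumption that footprints are expressed as network address prefixes; domain-name-suffix footprints would be handled analogously.

    # Hypothetical realisation of the covers()/footprint_specificity() helpers:
    # a longer prefix is a narrower, i.e. more specific, footprint.
    import ipaddress

    def covers(footprint: str, client_ip: str) -> bool:
        """True if the footprint prefix contains the client address."""
        return ipaddress.ip_address(client_ip) in ipaddress.ip_network(footprint)

    def footprint_specificity(node: dict) -> int:
        return ipaddress.ip_network(node["footprint"]).prefixlen

    # Nested footprints sort from widest transit to narrowest last hop:
    nodes = [{"footprint": "10.1.0.0/16"}, {"footprint": "10.0.0.0/8"},
             {"footprint": "10.1.2.0/24"}]
    print([n["footprint"] for n in sorted(nodes, key=footprint_specificity)])
    # -> ['10.0.0.0/8', '10.1.0.0/16', '10.1.2.0/24']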


Figure 5: OpenCDN entities main interfaces. [The portal issues ServiceRequest and TearDownRequest calls to the RRDM; nodes Register with the RRDM; Setup, Teardown, DoRelay and NoRelay calls flow from the RRDM to the nodes and between nodes.]

In step 4, the relay and mount-point URI best suited to serve the client request (i.e., the one with the narrowest footprint) are communicated back by the RRDM to the announcement portal, which in turn dynamically builds a response page for the client.

Finally, in step 5, when the RRDM is notified (by the announcement portal) about the end of transmission for a content origin, it asks the top-level nodes it has contacted before to propagate a tear-down command (see 3.7), thus dismantling the overlay network that was set up.

3.4 Interfaces

Communication between the announcement portal, the RRDM and the nodes is accomplished by XML-RPC [11] calls, for which a wide set of libraries and modules exists for all the major programming languages.

In the RRDM, two methods (SETUP and TEARDOWN) are implemented to be invoked by the announcement portal, and a third (REGISTER) is used by nodes to advertise their capabilities and footprints (see Fig. 5). Two other methods (STATUS and IPC) are used for inspecting the RRDM data structures, and for interprocess communication when the RRDM's concurrent operation mode is activated (see 3.9).

Within nodes, two methods (DORELAY and NORELAY) are implemented, which can be invoked by the RRDM or by other nodes, in order to request the setup of a relay which pulls content from some remote content origin or from another node, or to request the dismantlement of an active relay.
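The node side can be sketched analogously with the standard-library XML-RPC server. The method names DoRelay and NoRelay come from the text above; their parameters, return values and the port number are illustrative assumptions.

    # Hypothetical sketch of an OpenCDN node exposing its two methods.
    from xmlrpc.server import SimpleXMLRPCServer

    class Node:
        def DoRelay(self, source_uri, downstream_candidates):
            # Pull the stream from source_uri (a content origin or an upstream
            # node) and return the local mount point; probing and the recursive
            # call toward downstream candidates are omitted here.
            return {"mount": "rtsp://node.example.org/relay/live.sdp"}

        def NoRelay(self, source_uri):
            # Dismantle the active relay for source_uri and propagate the
            # teardown to the downstream peers this node has contacted.
            return True

    server = SimpleXMLRPCServer(("0.0.0.0", 9000), allow_none=True)
    server.register_instance(Node())
    server.serve_forever()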

3.5 Vendor independence

OpenCDN implements a vendor-agnostic architecture, suitable for concurrent operation of different brands of relay/streaming server devices. This is accomplished by wrapping the actual device used within an adaptation layer, which communicates with the other entities by means of XML-RPC calls, and directly controls the device by means of its native API set of calls. In this way, OpenCDN can easily be ported to different present and future streaming technologies by writing a new adaptation layer module, provided that it offers the same set of API calls, which are documented in [10]. Fig. 6 sketches the overall node architecture.

Figure 6: OpenCDN node adapted for Darwin Streaming Server. [The node exposes XML-RPC methods through the OpenCDN control layer, which drives the Darwin Streaming Server via the adaptation API of an adaptation layer built on the server's relay configuration file, Perl interface and HTML interface.]
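The adaptation layer amounts to a small, fixed contract that the control layer programs against and that each device-specific module implements. The following Python sketch conveys the idea under assumed method names; the actual API is a documented set of Perl calls [10].

    # Hypothetical sketch of the adaptation-layer contract described above.
    from abc import ABC, abstractmethod

    class AdaptationLayer(ABC):
        """Device-independent API the OpenCDN control layer relies on."""

        @abstractmethod
        def start_relay(self, source_uri: str) -> str:
            """Make the device pull source_uri; return the local mount point."""

        @abstractmethod
        def stop_relay(self, mount_point: str) -> None:
            """Tear down an active relay."""

    class DarwinAdapter(AdaptationLayer):
        """Adapter for the Darwin Streaming Server (cf. Fig. 6), driving it
        through its native interfaces (relay configuration file, web admin)."""

        def start_relay(self, source_uri: str) -> str:
            ...  # rewrite the relay configuration file and signal the server

        def stop_relay(self, mount_point: str) -> None:
            ...  # remove the relay entry and signal the server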

3.6 Hierarchy

Footprint (FP) information has a nice nesting property, from wider to narrower address sets, and a set of nodes which advertise nested FPs can be sorted accordingly, from the less to the more specific, with respect to a commonly embraced client address. Moreover, nodes may put themselves forward (at registration time) to become a LastHop (LH) or Transit (TR) relay for a given FP: in the latter case, they will not accept requests directly from clients, but only from other relays. These facts allow deployment of an arboreal distribution network, in which a stream is at first relayed by the widest (least specific) FP TR, then by some narrower FP TR, and then by the LH whose FP is the narrowest one which contains the client address (i.e. the most specific one). When a new client arrives, its address may already be served by a large FP TR, and only new lower-level relays may need to be activated.

Node behaviour can be characterised in terms of laziness, by means of a three-valued configuration option, which can hold one of the values 'labor', 'fair' and 'lazy', and which takes effect only if the node has declared itself to be a Transit. In the labor case, the node will try to contact the more specific LH node contained in the candidate list received from upstream, while in the lazy case, it will try to contact the less specific Transit node of the list. A labor behaviour means that subsequent CDN requests will continue to be dealt with by the same node, while a lazy behaviour will limit the node's involvement in future calls to only those footprints not already dealt with by some of its downstreamers. Finally, in the case of fair behaviour, the node will switch to lazy or labor, depending on whether node load conditions (CPU, bandwidth used, number of clients) exceed some threshold or not.
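A minimal sketch of the labor/fair/lazy choice just described. It assumes the candidate list received from upstream is non-empty and sorted from the least to the most specific footprint, and that load is summarised as a single ratio; both are simplifications of the actual OpenCDN configuration.

    # Hypothetical sketch of next-hop selection under the laziness option.
    def choose_next_hop(candidates, behaviour, load_ratio, threshold=0.8):
        """candidates: non-empty, sorted from least to most specific footprint."""
        if behaviour == "fair":
            # Back off like a lazy node when overloaded, else keep working.
            behaviour = "lazy" if load_ratio > threshold else "labor"
        if behaviour == "labor":
            # Most specific candidate: this node keeps serving future
            # requests for the footprints involved.
            return candidates[-1]
        # Least specific candidate: involvement in future calls is pushed
        # down to footprints not already handled by downstreamers.
        return candidates[0]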

We think that the described solution may help to gracefully scale the number of nodes involved as the audience grows, while at the same time keeping the number of hops low when not many clients request the same content.

3.7 Distributed processing

As illustrated, the RRDM only contacts a first-hop relay, and delegates to it the task of initiating a recursive call toward the allowed downstreamers. Besides alleviating the load on the RRDM, this behaviour also allows load and proximity measurements to be performed in a hop-by-hop fashion, as explained in subsection 3.8.

The complete topology information about the OpenCDN deployment is held by the RRDM; nodes only have to keep track of their own active relay names and mount points, and of the downstream peers they have contacted. The latter information allows a completely distributed teardown sequence to be performed.

3.8 Metric

During construction of the distribution tree, it may be found that more than one relay candidate has advertised the same FP, for redundancy purposes, for load balancing, or simply because of uncoordinated deployment. In that case, selection of the best candidate can be performed by the entity holding control of distribution for that hop (RRDM or node), by sending UDP probe packets to all the next-hop candidates. The latency in answering the probe is related both to the RTT between the nodes and to the target's load; so the node that answers first becomes the next-hop relay. This solution seems to work very well, and also avoids waiting for TCP timeouts in the case of node outages; thus, we decided to adopt it in general, and not only for equal-FP cases, by allowing a little time advantage to be given to a-priori preferred next-hop nodes. In this way, nodes behind congested links, or heavily loaded ones, can be excluded from participating in the delivery process.
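A sketch of this probing logic, assuming each candidate answers UDP probes on a known port. The probe payload, the port numbers and the representation of the a-priori preference (a per-candidate head start, in seconds) are illustrative assumptions.

    # Hypothetical sketch of next-hop selection by UDP probing: latency
    # reflects both RTT and target load, and preferred candidates get a
    # configurable time advantage subtracted from their answer time.
    import select, socket, time

    def probe_best(candidates, timeout=1.0):
        """candidates: list of (host, port, advantage_s); best adjusted answer wins."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setblocking(False)
        info = {}
        sent = time.monotonic()
        for host, port, advantage in candidates:
            addr = (socket.gethostbyname(host), port)
            info[addr] = (host, advantage)
            sock.sendto(b"OCDN-PROBE", addr)
        best, best_score = None, None
        deadline = sent + timeout
        while time.monotonic() < deadline:
            ready, _, _ = select.select([sock], [], [],
                                        max(0.0, deadline - time.monotonic()))
            if not ready:
                break  # remaining candidates timed out (no TCP-like waits)
            _, addr = sock.recvfrom(512)
            if addr in info:
                host, advantage = info[addr]
                score = (time.monotonic() - sent) - advantage
                if best_score is None or score < best_score:
                    best, best_score = host, score
        sock.close()
        return best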


3.9 Implementation

OpenCDN is written in Perl, and nodes are built around the Apple Darwin Streaming Server [12]. All the communication processing is currently single-threaded, except for the RRDM SETUP method, which can serve more than one request at a time. UDP probing is done toward multiple nodes in parallel, by spawning child processes.

The code is released under an open source license at http://labtel.ing.uniroma1.it/opencdn, allowing people to write new adaptation layer code for different devices, and to freely deploy more and more nodes. In fact, more than one node can be executed on the same machine, sharing the same relay device and adhering to different CDNs by registering with different RRDMs.

4 Announcement portal

Unlike TV and radio stations, the Internet is not organised in channels where programmes are ordered sequentially. Because of this, it is very time-consuming for viewers to find the right live transmission in any way other than by contacting the organisers. But the organiser has no legitimate way to contact all potential viewers. So there is a gap, which inhibits effective use of live transmissions on both the broadcasting and the viewing side.

A web portal (http://live.academic.tv/) operated by CESNET is designed to bridge this gap. The portal is independent of the event organiser and of the broadcasting technology. The announced content must be legal, the announcing organisation must have permission to broadcast it, and it must be relevant to the academic and research communities.

4.1 A viewer’s point of view

The portal aims to be truly international. A user can set his time zone and the portal will shift all the displayed time information accordingly. A user may choose his preferred language and the portal will highlight the announcements in the chosen language. Also, if he is lucky, the portal will use that language when communicating with him – currently implemented are not only English and German, but also Czech, Danish, Spanish, Finnish, Greek, Italian and Dutch. The portal will not, however, completely hide transmissions in other languages, as the academic community is – to a great extent – multilingual.

The portal is intended to look and feel brief, exact and familiar. A user selects a date from the monthly calendar (see Fig. 7) to get the daily “TV grid”, where the shows are displayed chronologically. Every show displays its title, a short and a long description, and links. As broadcasters are asked to provide links that refer directly to the stream source, users are not forced to wade through a broadcaster's HTML pages; interfacing through the portal thus unifies the user experience. Instead, the displayed information should give the user enough basic technical information about the stream to decide whether he can actually handle the content (bandwidth, codec limitations, language).

Reminders are a small yet useful added value of the portal. If a user enters an e-mail address (of a mobile phone, for example), he may get a short e-mail notification one day, one hour or 15 minutes ahead of the scheduled transmission start. As the portal is run by an international research group, independently of broadcasters, there is no fear of e-mail addresses being misused for unsolicited mail.

E-mail announcement lists are another feature of the portal. A user may subscribe to one or more mailing lists for commonly used languages in order to receive a brief listing of upcoming shows in the selected languages. E-mails are sent once a week.

General news concerning the deployment of the portal, new broadcasting technologies or self-promotion is displayed on every web page as well. Surprisingly, this feature has not been used very much by broadcasters yet.

4.2 A broadcaster’s point of view

As mentioned earlier, entering an announcement is done by the broadcaster himself. It can be done either via a web interface or via SMTP with a specially crafted e-mail.

Using the web interface is quite straightforward. After a simple authentication procedure, a broadcaster chooses a date and enters start and end times (in a previously chosen time zone), basic information, the e-mail addresses of the show organiser (to promote the organisation) and an e-mail address of the broadcaster (so viewers may complain on-line about technical quality). As entering stream links or HTML links in a flexible manner proved to be a problematic point, an escape hatch from the user-friendly interface to a SMIL-like tagged text area is used. An element “src” can be added ad libitum, with attributes defining the link, language, encoding technology, required min/max bitrates and a splash screen (a picture, preferably a small one); a hypothetical example is shown below. If a link points directly to an RTSP/MMS source, the portal may, upon request, create on the fly the startup files necessary for smooth integration with a browser.

Figure 7: The monthly calendar view of the portal.
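For illustration, a hypothetical “src” element in that tagged text area might look as follows; the attribute names and values are assumptions modelled on the description above, not the portal's documented syntax.

    <src href="rtsp://streamer.example.org/event.sdp"
         lang="en" technology="mpeg4"
         minbitrate="256" maxbitrate="512"
         splashscreen="http://streamer.example.org/img/event-thumb.jpg" />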

Last, but not least, a small yet very practical feature: in order to give broadcasters a chance to easily enter periodic events, a deep copy of already-entered information can be created and modified instantly.

An SMTP interface (also called the “universal gateway”) lets a user send information in an XML format. In order to authenticate the e-mail's origin, PKI technology is used; a small certification authority was set up for that purpose. The broadcaster has to create an S/MIME-signed e-mail. That way, the broadcaster's authenticity can be verified even before the XML document is parsed: spoofed e-mails can be thrown away immediately, while senders of malformed documents can be sent an e-mail warning. This efficiently prevents the system from reacting to computer viruses, erroneously sent e-mails, etc.

As mentioned earlier, the announcement itself is an XML document in UTF-8 (mandatory – the whole portal operates strictly with UTF-8). Its informal specification is available upon request [13]. If the document is well-formed, the user receives, via e-mail, an internal ID for the announcement. This ID (a decimal number) can be used later to modify or delete the announcement. If the document does not meet the requirements, an error message is mailed back. A sketch of the gateway's processing order follows.
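The gateway's processing order (verify the signature first, parse the XML second) can be sketched as follows; verify_and_extract(), mail_back() and store() are illustrative stubs, not actual portal code.

    # Hypothetical sketch of the universal gateway: spoofed mail is discarded
    # before any parsing, so viruses and junk never reach the XML parser.
    import email
    import xml.etree.ElementTree as ET

    def verify_and_extract(msg):
        """Stub: verify the S/MIME signature against the portal's own CA;
        return (signer_address, xml_payload_bytes) on success, else None."""
        ...

    def mail_back(recipient, text):
        """Stub: send a plain-text reply via SMTP."""
        ...

    def store(announcement):
        """Stub: store the announcement; return its decimal internal ID."""
        return 0  # placeholder

    def handle_incoming(raw_message: bytes) -> None:
        msg = email.message_from_bytes(raw_message)
        result = verify_and_extract(msg)
        if result is None:
            return  # spoofed or unsigned: discard immediately
        sender, payload = result
        try:
            announcement = ET.fromstring(payload.decode("utf-8"))  # UTF-8 only
        except (ET.ParseError, UnicodeDecodeError) as err:
            mail_back(sender, "Malformed announcement: %s" % err)
            return
        mail_back(sender, "Accepted, internal ID %d" % store(announcement))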

5 Acknowledgements

The authors would like to thank all the members of TF-NETCAST, many of whom were involved in the activities reported on in this paper.
