

A First Look at Quality of Mobile Live Streaming Experience: the Case of Periscope

Matti Siekkinen
School of Science
Aalto University, Finland
matti.siekkinen@aalto.fi

Enrico Masala
Control & Comp. Eng. Dep.
Politecnico di Torino, Italy
[email protected]

Teemu Kämäräinen
School of Science
Aalto University, Finland
teemu.kamarainen@aalto.fi

ABSTRACT
Live multimedia streaming from mobile devices is rapidly gaining popularity, but little is known about the QoE such services provide. In this paper, we examine the Periscope service. We first crawl the service in order to understand its usage patterns. Then, we study the protocols used, typical quality of experience indicators such as playback smoothness and latency, video quality, and the energy consumption of the Android application.

Keywords
Mobile live streaming; QoE; RTMP; HLS; Periscope

1. INTRODUCTION
Periscope and Meerkat are services that enable users to broadcast live video to a large number of viewers using their mobile device. Both emerged in 2015 and have since quickly gained popularity. Periscope, which was acquired by Twitter before the service was even launched, announced in March 2016, on its one-year birthday, that over 110 years of live video were watched every day with the application [13]. Facebook has also recently launched a rival service called Facebook Live.

Very few details have been released about how these streaming systems work and what kind of quality of experience (QoE) they deliver. One particular challenge they face is to provide a low-latency stream to clients, because of features that allow feedback from viewers to the broadcaster, for example in the form of a chat. Such interaction does not exist with “traditional” live video streaming systems, and it has implications on the system design (e.g., which protocols to use).


IMC 2016, November 14-16, 2016, Santa Monica, CA, USA
© 2016 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ISBN 978-1-4503-4526-2/16/11 ... $15.00
DOI: http://dx.doi.org/10.1145/2987443.2987472

We have measured the Periscope service in two ways. We first created a crawler that queries the Periscope API for ongoing live streams and used the gathered data of about 220K distinct broadcasts to analyze the usage patterns. Second, we automated the process of viewing Periscope broadcasts with an Android smartphone and generated a few thousand viewing sessions while logging various kinds of data. Using this data we examined the resulting QoE. In addition, we analyzed the video quality by post-processing the video data extracted from the traffic captures. Finally, we studied the application-induced energy consumption on a smartphone.

Our key findings are the following: 1) 2 Mbps appears to be the key boundary for access network bandwidth, below which startup latency and video stalling clearly increase. 2) Periscope appears to use the HLS protocol when a live broadcast attracts many participants and RTMP otherwise. 3) HLS users experience a longer playback latency for the live streams but typically fewer stall events. 4) The video bitrate and quality are very similar for both protocols and may exhibit significant short-term variations that can be attributed to the extreme time variability of the captured content. 5) Like most video apps, Periscope is power hungry but, surprisingly, the power consumption grows dramatically when the chat feature is turned on while watching a broadcast. The causes are significantly increased traffic and elevated CPU and GPU load.

2. METHODS AND DATA COLLECTION
The Periscope app communicates with the servers using an API that is private in the sense that access is protected by SSL. To get access to it, we set up an SSL-capable man-in-the-middle proxy, i.e., mitmproxy [12], in between the mobile device and the Periscope service as a transparent proxy. The proxy intercepts the HTTPS requests sent by the mobile device and pretends to be the server to the client and to be the client to the server. The proxy enables us to examine and log the exchange of requests and responses between the Periscope client and servers. The Periscope iOS app uses so-called certificate pinning, in which the certificate known to be used by the server is hard-coded into the client. Therefore, we only use the Android app in this study.
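
The request/response logging can be illustrated with a short mitmproxy addon. This is a minimal sketch, assuming a recent mitmproxy version (the measurements used mitmproxy's 2016 inline-script API); the output file name and the selection of logged fields are our own choices, not those of the original scripts.

```python
# log_periscope.py -- minimal mitmproxy addon sketch: record Periscope API traffic.
# Assumed invocation: mitmproxy --mode transparent -s log_periscope.py
import json
from mitmproxy import http

LOG_FILE = "periscope_api.jsonl"  # hypothetical output path


def response(flow: http.HTTPFlow) -> None:
    # Only consider calls to the private Periscope API endpoint.
    if "api.periscope.tv" not in flow.request.pretty_host:
        return
    record = {
        "path": flow.request.path,                        # e.g. /api/v2/getBroadcasts
        "request": flow.request.get_text(strict=False),   # JSON-encoded attributes
        "response": flow.response.get_text(strict=False),
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
```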

We used both Android emulators (Genymotion [3]) and smartphones in the study. We generated two data sets. For the first one, we used an Android emulator and developed an inline script for mitmproxy that crawls through the service by continuously querying for the ongoing live broadcasts. The obtained data was used to analyze the usage patterns (Sec. 4).

The second dataset was generated for the QoE analysis (Sec. 5) by automating the broadcast viewing process on a smartphone. The app has a “Teleport” button which takes the user directly to a randomly selected live broadcast. Automation was achieved with a script that sends tap events through the Android debug bridge (adb) to push the Teleport button, wait for 60 s, push the close button, push the “home” button, and repeat all over again. The script also captures all the video and audio traffic using tcpdump. Meanwhile, we ran another inline script with mitmproxy that dumped, for each broadcast viewed, a description and playback statistics, such as delay and stall events, which the application reports to a server at the end of a viewing session. This is mainly useful for the streaming sessions that use the RTMP protocol, because after an HTTP Live Streaming (HLS) session the app reports only the number of stall events. We also reconstruct the video data of each session and analyze it using a variety of scripts and tools. After finding and reconstructing the multimedia TCP stream using Wireshark [19], single segments are isolated by saving the response of the HTTP GET request, which contains an MPEG-TS file [5] ready to be played. For RTMP, we exploit the Wireshark dissector, which can extract the audio and video segments. The libav [10] tools have been used to inspect the multimedia content and decode the video in full for the analysis of Sec. 5.2.
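
The viewing automation itself is simple enough to sketch. The following is an illustrative Python version of the tap-event loop described above; the screen coordinates of the Teleport and close buttons are placeholders that depend on the device and app version, and the tcpdump capture is omitted.

```python
# Sketch of the automated viewing loop: tap "Teleport", watch for 60 s, close, repeat.
import subprocess
import time

TELEPORT_XY = (540, 1700)  # placeholder coordinates of the Teleport button
CLOSE_XY = (1000, 100)     # placeholder coordinates of the close button


def tap(x: int, y: int) -> None:
    subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)


def watch_one_broadcast() -> None:
    tap(*TELEPORT_XY)   # jump to a randomly selected live broadcast
    time.sleep(60)      # watch for exactly 60 seconds
    tap(*CLOSE_XY)      # leave the broadcast
    subprocess.run(["adb", "shell", "input", "keyevent", "KEYCODE_HOME"], check=True)


if __name__ == "__main__":
    while True:
        watch_one_broadcast()
```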

In the automated viewing experiments, we used two different phones: a Samsung Galaxy S3 and an S4. The phones were located in Finland and connected to the Internet by means of reverse tethering through a USB connection to a Linux desktop machine, providing them with over 100 Mbps of available bandwidth both up- and downstream. In some experiments, we imposed artificial bandwidth limits with the tc command on the Linux host. For latency measurement purposes (Section 5.1), NTP was enabled on the desktop machine, using the same server pool as the Periscope app.
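
As an illustration of the bandwidth limiting, the artificial limits could be imposed on the tethering host with a token bucket filter; the interface name and the burst/latency parameters below are placeholders, not the exact settings used in the experiments.

```python
# Sketch: impose or clear a rate limit on the reverse-tethering interface with tc.
import subprocess

IFACE = "usb0"  # hypothetical name of the reverse-tethering interface


def limit_bandwidth(rate_mbps: float) -> None:
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "tbf",
         "rate", f"{rate_mbps}mbit", "burst", "32kbit", "latency", "400ms"],
        check=True,
    )


def clear_limit() -> None:
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)
```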

3. PERISCOPE OVERVIEW
Periscope enables users to broadcast live video for other users to view. Both public and private broadcasting are available. Private streams are only viewable by chosen users. Viewers can use text chat and emoticons to give feedback to the broadcaster. The chat becomes full when a certain number of viewers have joined, after which newly joining users cannot send messages. Broadcasts can also be made available for replay.

Table 1: Relevant Periscope API commands.

API request            Request contents                      Response contents
mapGeoBroadcastFeed    Coordinates of a rectangle-shaped     List of broadcasts located inside
                       geographical area                     the area
getBroadcasts          List of 13-character broadcast IDs    Descriptions of broadcasts
                                                             (incl. number of viewers)
playbackMeta           Playback statistics                   nothing

A user can discover public broadcasts in three ways: 1) The app shows a list of about 80 ranked broadcasts in addition to a couple of featured ones. 2) The user can explore the map of the world in order to find a broadcast in a specific geographical region. The map shows only a fraction of the broadcasts available in a large region, and more broadcasts become visible as the user zooms in. 3) The user can click on the “Teleport” button to start watching a randomly selected broadcast.

Since the API is not public, we examined the HTTP requests and responses while using the app through mitmproxy in order to understand how the API works. The application communicates with the servers by sending POST requests containing JSON-encoded attributes to the following address: https://api.periscope.tv/api/v2/apiRequest. The apiRequest and its contents vary according to what the application wants to do. Requests relevant to this study are listed in Table 1.
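
For illustration, a request against this endpoint has roughly the following shape. The field names other than include_replay (mentioned in Section 4) are hypothetical stand-ins inferred from the table above, and a real call additionally requires valid session credentials obtained by the app at login.

```python
# Sketch of the inferred request shape; not a working client.
import requests

API_BASE = "https://api.periscope.tv/api/v2"


def map_geo_broadcast_feed(session_token, p1_lat, p1_lng, p2_lat, p2_lng):
    """Ask for live broadcasts inside a rectangular geographical area."""
    payload = {
        "session_token": session_token,      # hypothetical name for the credential field
        "p1_lat": p1_lat, "p1_lng": p1_lng,  # one corner of the rectangle (hypothetical names)
        "p2_lat": p2_lat, "p2_lng": p2_lng,  # opposite corner
        "include_replay": False,             # live broadcasts only (see Sec. 4)
    }
    r = requests.post(f"{API_BASE}/mapGeoBroadcastFeed", json=payload)
    r.raise_for_status()
    return r.json()                          # list of broadcast descriptions
```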

Periscope uses two kinds of protocols for video stream delivery: Real Time Messaging Protocol (RTMP) on port 80 and HTTP Live Streaming (HLS). RTMP enables low latency (Section 5), while HLS is employed to meet scalability demands. Facebook Live also uses the same set of protocols [8]. Further investigation reveals that the RTMP streams are always delivered by servers running on Amazon EC2 instances. For example, the IP address that the application got when resolving vidman-eu-central-1.periscope.tv maps to ec2-54-67-9-120.us-west-1.compute.amazonaws.com when performing a reverse DNS lookup. In contrast, HLS video segments are delivered by the Fastly CDN. RTMP streams use only one connection, whereas HLS may sometimes use multiple connections to different servers in parallel to fetch the segments, possibly for load balancing and/or resilience reasons. We study the logic of selecting the protocol and its impact on user experience in Section 5. Public streams are delivered using plaintext RTMP and HTTP, whereas private broadcast streams are encrypted using RTMPS and HTTPS for HLS. The chat uses WebSockets to deliver messages.

4. ANALYSIS OF USAGE PATTERNS

[Figure 1: Cumulative number of live broadcasts discovered as a function of crawled areas (# of requests): (a) absolute, (b) relative (%). Each curve corresponds to a different deep crawl.]

We first wanted to learn about the usage patterns of Periscope. The application does not provide a complete list of broadcasts, so the user needs to explore the service in the ways described in the previous section. In late March 2016, over 110 years of live video were watched every day through Periscope [13], which roughly translates into 40K live broadcasts ongoing at any given time.

We developed a crawler by writing a mitmproxy inline script that exploits the /mapGeoBroadcastFeed request of the Periscope API. The script intercepts the request made by the application after being launched and replays it repeatedly in a loop with modified coordinates, writing the response contents to a file. It also sets the include_replay attribute value to false in order to discover only live broadcasts. In addition, the script intercepts /getBroadcasts requests, replaces the contents with the broadcast IDs found by the crawler since the previous request, and extracts the viewer information from the response to a file.

We faced two challenges. First, we noticed that when specifying a smaller area, i.e., when the user zooms in on the map, new broadcasts are discovered for the same area. Therefore, to find a large fraction of the broadcasts, the crawler must explore the world using small enough areas. Second, Periscope servers use rate limiting, so that too frequent requests are answered with HTTP 429 (“Too many requests”), which forces us to pace the requests in the script and increases the completion time of a crawl. If the crawl takes a long time, it will miss broadcast information.

Our approach is to first perform a deep crawl and then to select only the most active areas from that crawl and query only them, i.e., perform a targeted crawl. The reason is that a deep crawl alone would produce too coarse-grained data about the duration and popularity of broadcasts because it takes over 10 minutes to finish. In a deep crawl, the crawler zooms into each area by dividing it into four smaller areas and recursively continues doing so until it no longer discovers substantially more broadcasts. Such a crawl finds 1K-4K broadcasts (much fewer than the assumed 40K total broadcasts, because we miss private broadcasts and those with their location undisclosed).

[Figure 2: Broadcasting and viewers: (a) CDFs of broadcast duration (min) and average viewers per broadcast; (b) average viewers per broadcast vs. local time of day (h).]

Figure 1 shows the cumulative number of broadcasts found as a result of crawls performed at different times of day. Figure 1(b) reveals that, for all the different crawls, half of the areas contain at least 80% of all the broadcasts discovered in the crawl. We select those areas from each crawl, 64 areas in total, for a targeted crawl. We divide them into four sets assigned to four different simultaneously running crawlers, i.e., four emulators running Periscope with a different user logged in (which avoids rate limiting), that repeatedly query the assigned areas. Such a targeted crawl completes in about 50 s.
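
A minimal sketch of the recursive subdivision behind the deep crawl is shown below. The query_area function stands in for the replayed /mapGeoBroadcastFeed request, and the stopping threshold for “substantially more broadcasts” is our own assumption.

```python
# Sketch of the deep-crawl recursion: query an area, and keep splitting it into four
# quadrants as long as zooming in keeps revealing substantially more broadcasts.
def deep_crawl(area, query_area, found=None, gain_threshold=0.1):
    """area = (lat_min, lng_min, lat_max, lng_max); returns a set of broadcast IDs."""
    if found is None:
        found = set()
    before = len(found)
    found.update(b["id"] for b in query_area(area))       # replayed API request (assumed)
    if len(found) - before <= gain_threshold * max(before, 1):
        return found                                       # no substantial gain: stop zooming
    lat_min, lng_min, lat_max, lng_max = area
    lat_mid, lng_mid = (lat_min + lat_max) / 2, (lng_min + lng_max) / 2
    for quadrant in [
        (lat_min, lng_min, lat_mid, lng_mid),
        (lat_min, lng_mid, lat_mid, lng_max),
        (lat_mid, lng_min, lat_max, lng_mid),
        (lat_mid, lng_mid, lat_max, lng_max),
    ]:
        deep_crawl(quadrant, query_area, found, gain_threshold)
    return found
```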

Figure 2 plots duration and viewing statistics about four different 4h-10h long targeted crawls started at different times of the day (note: both variables use the same scale). Broadcast duration was calculated by subtracting a broadcast's start time (included in its description) from the timestamp of the last moment the crawler discovered the broadcast. Only broadcasts that ended during the crawl were included (they must not have been discovered during the last 60 s of a crawl), totalling about 220K distinct broadcasts. Most of the broadcasts last between 1 and 10 minutes and roughly half are shorter than 4 minutes. The distribution has a long tail, with some broadcasts lasting for over a day.

The crawler gathered viewer information about 134K broadcasts. Over 90% of the broadcasts have fewer than 20 viewers on average, but some attract thousands of viewers. It would be interesting to know the contents of the most popular broadcasts, but the text descriptions are typically not very informative. Over 10% of the broadcasts have no viewers at all, and over 80% of those are unavailable for replay afterwards (replay information is contained in the descriptions we collect about each broadcast), which means that no one ever saw them. They are typically much shorter than broadcasts that have viewers (average durations 2 min vs. 13 min), although some last for hours. They represent about 2% of the total tracked broadcast time. The local time of day shown in Figure 2(b) is determined based on the broadcaster's time zone. Some viewing patterns are visible, namely a notable slump in the early hours of the day, a peak in the morning, and an increasing trend towards midnight, which suggests that broadcasts typically have local viewers.

This makes sense especially from the language preferences point of view. Besides the difference between broadcasts with and without any viewers, popularity is only very weakly correlated with duration.

5. QUALITY OF EXPERIENCE
In this section, we study the data set generated through automated viewing with the Android smartphones. It consisted of streaming sessions with and without a bandwidth limit to the Galaxy S3 and S4 devices. We have data on 4615 sessions in total: 1796 RTMP and 1586 HLS sessions without a bandwidth limit, and 18-91 sessions for each specific bandwidth limit. Since the number of recorded sessions is limited, the results should be taken as indicative. The fact that our phone had high-speed non-mobile Internet access means that typical users may experience worse QoE because of a slower and more variable Internet access with longer latency.

HLS seems to be used only when a broadcast is very popular. A comparison of the average number of viewers seen in RTMP and HLS sessions suggests that the boundary number of viewers beyond which HLS is used is somewhere around 100. By examining the IP addresses from which the video was received, we noticed that 87 different Amazon servers were employed to deliver the RTMP streams. We could locate only nine of them using maxmind.com, but among those nine there was at least one in each continent except Africa, which indicates that the server is chosen based on the location of the broadcaster. All the HLS streams were delivered from only two distinct IP addresses, which maxmind.com places somewhere in Europe and in San Francisco. We do not currently know how the video gets embedded into an HLS stream for popular broadcasts, but we assume that the RTMP stream gets transcoded, repackaged, and delivered to the Fastly CDN by Periscope servers. The fact that we used a single measurement location explains the difference in server locations observed between the protocols: as confirmed by the analysis in [18], the RTMP server nearest to the broadcasting device is chosen when the broadcast is initialized, while the Fastly CDN server is chosen based on the location of the viewing device.

Since we had data from two different devices, we performed a number of Welch's t-tests in order to understand whether the data sets differ significantly. Only the frame rate differs statistically significantly between the two datasets. Hence, we combine the data in the following analysis of video stalling and latency.

5.1 Playback Smoothness and Latency
We first look at playback stalling. For RTMP streams, the app reports the number of stall events and the average stall time of an event, while for HLS it only reports the number of stall events. The stall ratio plotted for the RTMP streams in Figure 3(a) is calculated as the summed-up stall time divided by the total stream duration, including stall and playback time.

[Figure 3: Analysis of the stall ratio for RTMP streams with and without bandwidth limiting: (a) CDF of stall ratio without bandwidth limiting; (b) stall ratio vs. bandwidth limit (Mbps).]

The bandwidth limit of 100 in the figure refers to the unlimited case. Most streams do not stall, but there is a notable number of sessions with a stall ratio of 0.05-0.09, which usually corresponds to a single stall event that lasts roughly 3-5 s. The boxplots in Figure 3(b) suggest that a vast majority of the broadcasts are streamed with a bitrate below 2 Mbps, because with access bandwidth greater than that the broadcasts exhibited very little stalling. As for the broadcasts streamed using HLS, comparing their stall count to that of the RTMP streams indicates that stalling is rarer with HLS than with RTMP, which may be caused by HLS being an adaptive streaming protocol capable of quality switching on the fly.

The average video bitrate is usually between 200 and 400 kbps (see Section 5.2), which is much less than the 2 Mbps limit we found. The most likely explanation for this discrepancy is the chat feature. We measured the phone traffic with and without chat and observed a substantial increase in traffic when the chat was on. A closer look revealed that the JSON-encoded chat messages are received even when chat is off, but when the chat is on, image downloads from Amazon S3 servers appear in the traffic. The reason is that the app downloads profile pictures of chatting users and displays them next to their messages, which may cause a dramatic increase in the traffic. For instance, we saw the aggregate data rate increase from roughly 500 kbps to 3.5 Mbps in one experiment. The precise effect on traffic depends on the number of chatting users, their messaging rate, the fraction of them having a profile picture, and the format and resolution of the profile pictures. We also noticed that some pictures were downloaded multiple times, which indicates that the app does not cache them.

Each broadcast was watched for exactly 60 s from the moment the Teleport button was pushed. We calculate the join time, often also called startup latency, by subtracting the summed-up playback and stall time from 60 s, and plot it in Figure 4(a) for the RTMP streams. In addition, we plot in Figure 4(b) the playback latency, which is equivalent to the end-to-end latency.


[Figure 4: Boxplots showing that playback latency and join time of RTMP streams increase when bandwidth is limited: (a) join time (s) vs. bandwidth limit (Mbps); (b) playback latency (s) vs. bandwidth limit (Mbps). Notice the difference in scales.]

The y-axis scale was cut, leaving out some outliers that ranged up to 4 min in the case of playback latency. Both metrics increase when bandwidth is limited. In particular, join time grows dramatically when bandwidth drops to 2 Mbps and below. The average playback latency was roughly a few seconds when the bandwidth was not limited.
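
As a small worked example of the two metrics (with illustrative numbers, not values from the dataset):

```python
# Each broadcast is watched for a 60 s window; playback, stall, and join time fill it.
WATCH_TIME = 60.0


def stall_ratio(total_stall_time: float, playback_time: float) -> float:
    # summed-up stall time over total stream duration (stall + playback)
    return total_stall_time / (total_stall_time + playback_time)


def join_time(total_stall_time: float, playback_time: float) -> float:
    # startup latency = watch window minus time spent playing or stalled
    return WATCH_TIME - (playback_time + total_stall_time)


# Example: one 4 s stall during 52 s of playback
# -> stall ratio = 4 / 56 ~= 0.07, join time = 60 - 56 = 4 s
print(stall_ratio(4.0, 52.0), join_time(4.0, 52.0))
```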

[Figure 5: CDF of video delivery latency (s) for HLS and RTMP sessions. Video delivery latency is much longer with HLS compared to RTMP.]

Through experiments where we controlled both the broadcasting and receiving client and captured both devices' traffic, we noticed that the broadcasting client application regularly embeds an NTP timestamp into the video data, which is subsequently received by each viewing client. The experiments indicated that the NTP timestamps transmitted by the broadcasting device are very close to the tcpdump timestamps in a trace captured by the tethering machine. Hence, the timestamps enable calculating the delivery latency by subtracting the NTP timestamp value from the time of receiving the packet containing it, also for the HLS sessions for which the playback metadata does not include it. We calculate the average over all the latency samples for each broadcast. Figure 5 shows the distribution of the video delivery latency for the sessions that were not bandwidth limited. Even though our packet capturing machine was NTP synchronized, we sometimes observed small negative time differences, indicating that the synchronization was imperfect.

[Figure 6: Characteristics of the captured videos: (a) CDF of video bitrate (Mbit/s) for HLS and RTMP; (b) average QP vs. bitrate (Mbit/s).]

Nevertheless, the results clearly demonstrate the impact of using HLS on the delivery latency. RTMP stream delivery is very fast, taking less than 300 ms for 75% of broadcasts on average, which means that the majority of the few seconds of playback latency with those streams comes from buffering. In contrast, the delivery latency with HLS streams is over 5 s on average. As expected, the delivery latency grows when bandwidth is limited, similarly to the playback latency. A more detailed analysis of the latency can be found in [18]. The delivery latency we observed matches quite well with their delay breakdown results (end-to-end delay excluding buffering).
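
The delivery-latency computation described above boils down to a clock-base conversion, sketched here. How the NTP timestamp is located inside the video data is not shown, and the field layout assumed is the standard 64-bit NTP format rather than anything Periscope-specific we could verify.

```python
# Sketch: delivery latency = packet capture time (Unix epoch) minus the embedded
# NTP timestamp written by the broadcasting client (NTP epoch starts in 1900).
NTP_UNIX_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01


def ntp_to_unix(ntp_seconds: int, ntp_fraction: int) -> float:
    return ntp_seconds - NTP_UNIX_OFFSET + ntp_fraction / 2**32


def delivery_latency(pcap_timestamp: float, ntp_seconds: int, ntp_fraction: int) -> float:
    # Both clocks are assumed NTP-synchronized; small negative results indicate
    # imperfect synchronization, as noted above.
    return pcap_timestamp - ntp_to_unix(ntp_seconds, ntp_fraction)
```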

In summary, HLS appears to be a fallback solution to the RTMP stream. The RTMP servers can push the video data directly to viewers right after receiving it from the broadcasting client. HLS delivery requires the data to be packaged into complete segments, possibly while transcoding it to multiple qualities, and the client application needs to request each video segment separately, which all adds up to the latency. HLS does produce fewer stall events, but we have seen no evidence of the video bitrate being adapted to the available bandwidth (Section 5.2). It is possible that the buffer sizing strategy causes the difference in the number of stall events between the two protocols, but we cannot confirm this at the moment.

5.2 Audio and Video Quality
Both RTMP and HLS communications employ standard codecs for audio and video, that is, AAC (Advanced Audio Coding) for audio [7] and AVC (Advanced Video Coding) for video [6]. In more detail, audio is sampled at 44,100 Hz, 16 bit, and encoded in Variable Bit Rate (VBR) mode at about either 32 or 64 kbps, which seems enough to transmit almost any type of audio content (e.g., voice, music, etc.) with the quality expected from capturing through a mobile device.

Video resolution is always 320×568 (or vice versa, depending on orientation). The video frame rate is variable, up to 30 fps. Occasionally, some frames are missing, hence concealment must be applied to the decoded video. This is probably because the uploading device had some issues, e.g., glitches in the real-time encoding or during upload.


Fig. 6(a) shows the video bitrate, typically ranging between 200 and 400 kbps. Moreover, there is almost no difference between HLS and RTMP except for the maximum bitrate, which is higher for RTMP. Analysis of such cases reveals that coding schemes with poor efficiency have been used (e.g., I-type frames only). The most common segment duration with HLS is 3.6 s (60% of the cases), and it ranges between 3 and 6 s. However, the corresponding bitrate can vary significantly. In fact, in real applications rate control algorithms try to keep the average bitrate close to a given target, but this is often challenging as changes in the video content directly influence how difficult it is to achieve such a bitrate. To this aim, the so-called quantization parameter (QP) is dynamically adjusted [2]. In short, the QP value determines how much detail is discarded during video compression, hence it can be used as a very rough indication of the quality of a given video segment. Note that the higher the QP, the lower the quality, and vice versa.

To investigate quality, we extracted the QP and computed its average value for all the videos. Fig. 6(b) shows the QP vs. bitrate for each captured video (the whole video for RTMP and each segment for HLS). When the quality (i.e., the QP value) is roughly the same, the bitrate varies over a large range. On one hand, this is an indication that the type of content strongly differs among the streams. For instance, some of them feature very static content, such as one person talking against a static background, while others show, e.g., soccer matches captured from a TV screen. On the other hand, observing how the bitrate and average QP values vary over time may provide interesting indications on the evolution of the communication, e.g., hints about whether representation changes are used. Unfortunately, we are currently unable to draw definitive conclusions, since the variation could also be due to significant changes in the video content, which should be analyzed in more depth.

Finally, we investigated the frame type pattern used for encoding. Most videos use a repeated IBP scheme. A few encodings (20.0% for RTMP and 18.4% for HLS) employ only I and P frames (or just I frames in 2 cases). After about 36 frames, a new I frame is inserted. Although one B frame adds a delay equal to the duration of the frame itself, we speculate that the reason B frames are absent from some streams could be that some old hardware might not support them for encoding.
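
To illustrate the frame-type analysis, the pattern can be counted directly from a reconstructed segment. The paper used the libav tools; this sketch assumes an ffprobe-compatible binary is available on the path.

```python
# Sketch: count I/P/B picture types in a captured MPEG-TS segment or FLV stream.
import subprocess
from collections import Counter


def frame_type_histogram(video_file: str) -> Counter:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", video_file],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip() for line in out.splitlines() if line.strip())


# A repeated IBP scheme shows up as roughly one I frame per ~36 frames,
# e.g. Counter({'P': 24, 'B': 11, 'I': 1}) for a short segment.
```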

5.3 Power Consumption
We connected a Samsung Galaxy S4 4G+ smartphone to a Monsoon Power Monitor [1] in order to measure its power consumption, as instructed in [17]. We used the PowerTool software to record the data measured by the power monitor and to export it for further analysis.

The screen brightness was at maximum in all test cases and the sound was off. The phone was connected to the Internet through non-commercial WiFi and LTE networks (the latter a full-fledged LTE network operated by Nokia, with DRX enabled using a typical timer configuration).

[Figure 7: Average power consumption (mW) with Periscope and an idle device, over WiFi and LTE, for: home screen, app on, video on (not live), video on (RTMP, chat off), video on (HLS, chat off), video on (HLS, chat on), and broadcasting.]

Figure 7 shows the results. We measured the idle power draw in the Android application menu to be around 900 to 1000 mW with both WiFi and LTE connections. With the Periscope app on but without video playback, the power draw already grows to 1537 mW with WiFi and to 2102 mW with LTE, because the application refreshes the list of available videos every 5 seconds.

Playing back old recorded videos with the application consumes an equal amount of power as playing back live videos. The power consumption difference between RTMP and HLS is also very small. Interestingly, enabling the chat feature of the Periscope videos raises the power consumption to 2742 mW with WiFi and up to 3599 mW with LTE. This is only slightly less than when broadcasting from the application; note, however, that the test broadcasts had no chat displayed on the screen.

We further investigated the impact of the chat feature by monitoring CPU and GPU activity and network traffic. Both processors use DVFS to scale power draw to the dynamic workload [17]. We noticed an increase of roughly one third in the average CPU and GPU clock rates when the chat is enabled, which implies a higher power draw by both processors. Recall from Section 5.1 that the chat feature may increase the amount of traffic, especially with streams having an active chat, which inevitably increases the energy consumed by wireless communication. The energy overhead of chat could be mitigated by caching profile pictures and allowing users to disable their display in the chat.

6. RELATED WORK
Live mobile streaming is attracting increasing attention, including from the sociological point of view [14]. In the technical domain, research on live streaming has focused on issues such as distribution optimization [11], including scenarios with direct communication among devices [22].


The crowdsourcing of the streaming activity itself has also received particular attention [4, 21].

Stohr et al. have analyzed the YouNow service [15], and Tang et al. investigated the role of human factors in Meerkat and Periscope [16]. Little is known, however, about how such mobile applications perform. Most of the research has focused on systems where the mobile device is only the receiver of the live stream, like Twitch.tv [20], or on other mobile VoD systems [9].

We believe that this work, together with the work of Wang et al. [18], is the first to provide measurement-based analyses of the anatomy and performance of a popular mobile live streaming application. Wang et al. thoroughly study the delay and its origins but, similar to us, also show results on usage patterns and video stalling, particularly the impact of buffer size. They also reveal a particular vulnerability in the service.

7. CONCLUSIONS
We explored the Periscope service, providing insight into some key performance indicators. Both usage patterns and technical characteristics of the service (e.g., delay and bandwidth) were addressed. In addition, the impact of using such a service on mobile devices was studied through a characterization of the energy consumption. We expect that our findings will contribute to a better understanding of how the challenges of mobile live streaming are being tackled in practice.

8. ACKNOWLEDGMENTS
This work has been financially supported by the Academy of Finland, grant numbers 278207 and 297892, and the Nokia Center for Advanced Research.

9. REFERENCES
[1] Monsoon: www.msoon.com.
[2] Z. Chen and K. N. Ngan. Recent advances in rate control for video coding. Signal Processing: Image Communication, 22(1):19-38, 2007.
[3] Genymotion: https://www.genymotion.com/.
[4] Q. He, J. Liu, C. Wang, and B. Li. Coping with heterogeneous video contributors and viewers in crowdsourced live streaming: A cloud-based approach. IEEE Transactions on Multimedia, 18(5):916-928, May 2016.
[5] ISO/IEC 13818-1. MPEG-2 Part 1 - Systems, Oct. 2007.
[6] ISO/IEC 14496-10 & ITU-T H.264. Advanced Video Coding (AVC), May 2003.
[7] ISO/IEC 14496-3. MPEG-4 Part 3 - Audio, Dec. 2005.
[8] F. Larumbe and A. Mathur. Under the hood: Broadcasting live video to millions. https://code.facebook.com/posts/1653074404941839/under-the-hood-broadcasting-live-video-to-millions/, Dec. 2015.
[9] Z. Li, J. Lin, M.-I. Akodjenou, G. Xie, M. A. Kaafar, Y. Jin, and G. Peng. Watching videos from everywhere: a study of the PPTV mobile VoD system. In Proc. of the 2012 ACM Internet Measurement Conference, pages 185-198. ACM, 2012.
[10] LibAV Project: https://libav.org/.
[11] T. Lohmar, T. Einarsson, P. Frojdh, F. Gabin, and M. Kampmann. Dynamic adaptive HTTP streaming of live content. In 2011 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), pages 1-8, June 2011.
[12] Mitmproxy Project: https://mitmproxy.org/.
[13] Periscope. Year one. https://medium.com/@periscope/year-one-81c4c625f5bc#.mzobrfpig, Mar. 2016.
[14] D. Stewart and J. Littau. Up, periscope: Mobile streaming video technologies, privacy in public, and the right to record. Journalism and Mass Communication Quarterly, Special Issue: Information Access and Control in an Age of Big Data, 2016.
[15] D. Stohr, T. Li, S. Wilk, S. Santini, and W. Effelsberg. An analysis of the YouNow live streaming platform. In 2015 IEEE 40th Local Computer Networks Conference Workshops (LCN Workshops), pages 673-679, Oct. 2015.
[16] J. C. Tang, G. Venolia, and K. M. Inkpen. Meerkat and Periscope: I stream, you stream, apps stream for live streams. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pages 4770-4780, New York, NY, USA, 2016. ACM.
[17] S. Tarkoma, M. Siekkinen, E. Lagerspetz, and Y. Xiao. Smartphone Energy Consumption: Modeling and Optimization. Cambridge University Press, 2014.
[18] B. Wang, X. Zhang, G. Wang, H. Zheng, and B. Y. Zhao. Anatomy of a personalized livestreaming system. In Proc. of the 2016 ACM Internet Measurement Conference (IMC), 2016.
[19] Wireshark Project: https://www.wireshark.org/.
[20] C. Zhang and J. Liu. On crowdsourced interactive live streaming: A Twitch.tv-based measurement study. In Proceedings of the 25th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, NOSSDAV '15, pages 55-60, New York, NY, USA, 2015. ACM.
[21] Y. Zheng, D. Wu, Y. Ke, C. Yang, M. Chen, and G. Zhang. Online cloud transcoding and distribution for crowdsourced live game video streaming. IEEE Transactions on Circuits and Systems for Video Technology, PP(99):1-1, 2016.
[22] L. Zhou. Mobile device-to-device video distribution: Theory and application. ACM Trans. Multimedia Comput. Commun. Appl., 12(3):38:1-38:23, Mar. 2016.