Telemetry in Depth
Tianyou Li ([email protected])

DESCRIPTION

A brief introduction to the Telemetry performance framework at the code level.

TRANSCRIPT

Page 1: Telemetry indepth

Telemetry in Depth

Tianyou Li

[email protected]

Page 2: Telemetry indepth

Agenda

Overview

Methodology

High Level Charts

Low Level Details

Q & A

Page 3: Telemetry indepth

Overview

What is Telemetry?
"Telemetry is Chrome's performance testing framework. It allows you to perform arbitrary actions on a set of web pages and report metrics about it." - http://www.chromium.org/developers/telemetry

Supported platforms
Target: ChromeOS (Android is mentioned in the official docs but was not tried)
Host: Linux (other hosts are mentioned in the official docs but were not tried)

Goals
Get familiar with the code layout and structures
Understand the control and data flow of the Telemetry performance test framework
Know how results are collected, calculated, and reported

Non-goals
WPR
DevTools Remote Debugging Protocol
Everything (every class) else not in the example case
Writing a new case - this will be covered in another set of slides, soon...


Page 4: Telemetry indepth

Methodology

Example Driven

Top-Down

Driven by Questions


Page 5: Telemetry indepth

What is the example we are going through?


Page 6: Telemetry indepth

Example

Host Ubuntu 12.04 + LiClipse + Chromium*

Command line:
./run_benchmark --browser=cros-chrome --remote=<the chromebook ip> --output-format=csv --reset-results smoothness.tough_canvas_cases

* commit 921029a5e539df5716417516d2e6096bfbb6586e

Page 7: Telemetry indepth

What code are we going through?

Page 8: Telemetry indepth

The code structure

tools/telemetry

tools/perf


Page 9: Telemetry indepth

tools/telemetry

Page 10: Telemetry indepth

tools/telemetry/telemetry/core

browser.py

webpagereplay.py

browser_finder.py

cros_forwarder.py

backends/*

backends/chrome/*

backends/chrome/cros_interface.py
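To see how these core pieces fit together, the upstream docs of that era show a usage pattern roughly like the sketch below. This is reconstructed from memory and not verified against this exact commit; class and method names may differ slightly.

from telemetry.core import browser_finder
from telemetry.core import browser_options

# Build finder options; on the command line these come from
# --browser / --remote ('cros-chrome' matches the example in these slides).
options = browser_options.BrowserFinderOptions()
options.browser_type = 'cros-chrome'

# browser_finder picks a matching browser backend; for cros-chrome the
# work ends up in backends/chrome/cros_interface.py over ssh.
possible_browser = browser_finder.FindBrowser(options)
browser = possible_browser.Create()
try:
    tab = browser.tabs[0]
    tab.Navigate('http://example.com')
finally:
    browser.Close()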


Page 11: Telemetry indepth

tools/perf

run_benchmark(.py)

benchmark/smoothness.py

measurements/smoothness.py

page_sets/smoothness.py


Page 12: Telemetry indepth

Relations


Page 13: Telemetry indepth

What are the basic concepts in Telemetry?


Page 15: Telemetry indepth

How are those concepts connected in code?


Page 16: Telemetry indepth

Overall

run_benchmark → test_runner → smoothness (benchmark)

• composes the measurement and the page_set

page_runner

• runs the measurement for each page

• collects and outputs the result
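A minimal stand-in sketch of that composition. The class and attribute names here are illustrative only, not the real telemetry classes:

# Stand-in base class; the real one lives in the telemetry package.
class Benchmark(object):
    measurement = None   # what to measure on each page
    page_set = None      # which pages to run

    @classmethod
    def describe(cls):
        return 'run %s over %s' % (cls.measurement, cls.page_set)

# Mirrors how smoothness.tough_canvas_cases pairs its pieces.
class SmoothnessToughCanvasCases(Benchmark):
    measurement = 'Smoothness'          # measurements/smoothness.py
    page_set = 'tough_canvas_cases'     # page_sets/

print(SmoothnessToughCanvasCases.describe())
# -> run Smoothness over tough_canvas_cases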


Page 17: Telemetry indepth

page_runner

Set up results according to --output-format

• csv_page_measurement_results.py

Find a browser according to --browser

• browser_finder locates a suitable browser executable on the target

• cros_interface.py uses ssh to find '/opt/google/chrome/chrome'

Prepare resources according to the page set

• Check that the page set is present

• Check WPR options

• Check that the page archive is present; download it if the version changed

Start the WPR server

• webpagereplay.py

• Get the WPR server port numbers

Set up an ssh reverse tunnel for the target's http(s) requests/responses

• cros_forwarder.py

Start the browser

• ssh with a dbus-send command, options: --no-proxy-server, --host-resolver-rules=MAP * 127.0.0.1\,EXCLUDE localhost, --testing-fixed-http-port=59219, --testing-fixed-https-port=59220, --remote-debugging-port=59221

Set up an ssh forward tunnel for the remote debug port

• -L47259:127.0.0.1:59221

Run the measurement for each page (a condensed sketch follows this list)

• Measurement start/end calls smoothness_controller.py to start/stop the browser trace

• The start/stop commands are sent via WebSocket through the debug port

• The browser trace is collected for about 5 seconds

• inspector_backend executes simple JavaScript to generate start/stop markers

When the page run completes (in this case, after 5 s), compute the performance data

• fps

• jank
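The condensed sketch referenced above. All names are simplified stand-ins rather than the real page_runner.py code; the port numbers are the ones observed in this example run:

from contextlib import contextmanager

@contextmanager
def wpr_server(archive):
    # webpagereplay.py: replays recorded http(s) traffic for the page set
    print('start WPR for', archive)
    yield {'http': 59219, 'https': 59220}
    print('stop WPR')

@contextmanager
def ssh_tunnel(spec):
    # cros_forwarder.py / plain ssh -R and -L options
    print('open tunnel:', spec)
    yield
    print('close tunnel:', spec)

def run_pages(pages, measure):
    results = []
    with wpr_server('tough_canvas_cases.wpr') as ports:
        with ssh_tunnel('reverse: target http(s) -> host WPR %s' % ports):
            print('start browser via dbus-send, remote-debugging-port=59221')
            with ssh_tunnel('forward: -L47259:127.0.0.1:59221'):
                for page in pages:
                    # trace ~5 s via the debug-port WebSocket, then compute
                    results.append(measure(page))
    return results

print(run_pages(['http://mudcu.be/labs/JS1k/BreathingGalaxies.html'],
                lambda page: {'page': page, 'mean_frame_time': 17.264}))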


Page 18: Telemetry indepth

Results:
- Where is it?
- What is the output?
- How is the data generated?
- How do we interpret the data?


Page 19: Telemetry indepth

Where is it? What is the output?

In our example, the result is simply displayed on stdout. You can specify the -o option to send the output to a designated file.

The output looks like the following:


page_name                                                                 frame_times (ms)  jank (ms)   mean_frame_time (ms)  mean_pixels_approximated (percent)  mostly_smooth (score)
http://mudcu.be/labs/JS1k/BreathingGalaxies.html                          17.26396552       149.406     17.264                -                                   -
http://runway.countlessprojects.com/prototype/performance_test.html      40.03739837       2732.9738   40.037                -                                   -
http://ie.microsoft.com/testdrive/Performance/FishIETank/Default.html    16.6890301        34.4974     16.689                -                                   -
http://ie.microsoft.com/testdrive/Performance/SpeedReading/Default.html  31.9674359        646.008     31.967                -                                   -

Page 20: Telemetry indepth

How is the data generated?

Raw data is sent from the Chrome browser via the debug port

Processed by telemetry/telemetry/web_perf/metrics/smoothness.py

frame_times: arithmetic mean of the frame_time sequence

jank: discrepancy of the frame_time sequence

mean_frame_time: arithmetic mean of the frame_time sequence, rounded to 3 decimal places

mean_pixels_approximated: not available

mostly_smooth: 1 if 95% of frame times are under 19 ms (roughly one 60 Hz frame), otherwise 0
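A simplified sketch of how these values relate to the raw frame_time sequence. This is not the actual smoothness.py code; in particular, the real jank value comes from Telemetry's discrepancy statistic, which is only named above and not reproduced here:

import math

def mean(values):
    return sum(values) / float(len(values))

def percentile(values, p):
    # Nearest-rank percentile; Telemetry's own helper may interpolate.
    ordered = sorted(values)
    k = max(0, int(math.ceil(p / 100.0 * len(ordered))) - 1)
    return ordered[k]

def summarize(frame_times_ms):
    return {
        'frame_times (ms)': mean(frame_times_ms),
        'mean_frame_time (ms)': round(mean(frame_times_ms), 3),
        # 1 if 95% of frame times are under 19 ms (roughly one 60 Hz frame)
        'mostly_smooth (score)': 1 if percentile(frame_times_ms, 95) < 19.0 else 0,
    }

print(summarize([16.2, 17.1, 16.8, 33.4, 16.5]))
# -> {'frame_times (ms)': 20.0, 'mean_frame_time (ms)': 20.0,
#     'mostly_smooth (score)': 0}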


Page 21: Telemetry indepth

How do we interpret the data?

frame_times / mean_frame_time: the inverse of FPS (lower means a higher frame rate)

jank: smoothness (lower means smoother)

score (mostly_smooth): overall quality
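For example, using the FishIETank row from the results table above:

# FPS is the reciprocal of the mean frame time (converted to seconds):
mean_frame_time_ms = 16.689
fps = 1000.0 / mean_frame_time_ms
print(round(fps, 1))  # 59.9 -- close to the 60 Hz target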


Page 22: Telemetry indepth

Tough Questions?


Page 23: Telemetry indepth

Questions

What is the overhead on the *target* when using Telemetry?

What is the memory consumption on the *host* over time?

Can a benchmark run with multiple measurements? Why or why not?

How many metrics are currently supported on ChromeOS? Have we tried all of them?

A WPR package contains data for multiple applications; how do we update that data? Will updating one application affect the others?