
EXPLOITING CONTEXT IN DEALING WITH PROGRAMMING ERRORS AND EXCEPTIONS

Mohammad Masudur Rahman

Department of Computer Science

University of Saskatchewan


Exception: a common experience!!

Fig: Exception triggering point


EXCEPTION DEBUGGING

Not a helpful message for understanding or solving the exception

Web search!!


SOLVING EXCEPTION (STEP I: WEB SEARCH)

The browser does not know the context (i.e., details) of the exception.
Ranking is not very helpful
Hundreds of search results
Forces the developer to switch back and forth between IDE and browser
Trial and error in searching
Switching is often distracting


SOLVING EXCEPTION (STEP II: QUERY SELECTION)

Selection of traditional search query

Switching to web browser for web search

Such a query may not be sufficient for most exceptions


SOLVING EXCEPTION (STEP III: POST-SEARCH ANALYSIS)

• Only the most relevant section needs to be checked to determine the relevance of a page.
• Frequent switching between IDE and web browser for content analysis
• Manual analysis of a number of web pages is non-trivial and time-consuming


SOLVING EXCEPTION (STEP IV: HANDLING EXCEPTIONS)

Only adds a generic handler (i.e., printStackTrace()) for the exception.
Not much help for effective handling of the exception.
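To illustrate the difference, here is a minimal Java sketch (a hypothetical FileReader scenario, not taken from the slides): the IDE-generated handler merely prints the stack trace, while a more deliberate handler reports context and recovers.

```java
import java.io.FileNotFoundException;
import java.io.FileReader;

public class HandlerExample {

    // Generic handler as inserted by the IDE quick-fix: it only logs the trace and moves on.
    static FileReader openGeneric(String path) {
        try {
            return new FileReader(path);
        } catch (FileNotFoundException e) {
            e.printStackTrace();   // exception is effectively swallowed
            return null;           // every caller must now guard against null
        }
    }

    // A more deliberate handler: report the context and fall back to a default resource.
    static FileReader openWithFallback(String path, String fallbackPath) {
        try {
            return new FileReader(path);
        } catch (FileNotFoundException e) {
            System.err.println("Missing file: " + path + ", falling back to " + fallbackPath);
            try {
                return new FileReader(fallbackPath);
            } catch (FileNotFoundException inner) {
                throw new IllegalStateException("No readable configuration found", inner);
            }
        }
    }
}
```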


OBSERVATIONS ON TRADITIONAL/AD-HOC APPROACH FOR EXCEPTION SOLVING

Step I: Web search may not be very effective or reliable

Step II: Support for search query formulation is not enough

Step III: Support for post-search analysis is insufficient

Step IV: Support for exception handling is not enough


DEALING WITH ERRORS & EXCEPTIONS: PROPOSED APPROACH


CONTRIBUTIONS OF THIS THESIS

(1) SurfClipse (WCRE 2013, CSMR/WCRE 2014): Web search
(2) QueryClipse (ICSME 2014): Query formulation
(3) ContentSuggest: Post-search analysis
(4) SurfExample (SCAM 2014): Exception handling
(5) ExcClipse: User study


TECHNICAL DETAILS & PROGRAMMING CONTEXT OF AN EXCEPTION

Programming context (i.e., context code)

Error message

Stack trace
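For concreteness, a small invented Java illustration of these three ingredients (the snippet, message, and trace below are hypothetical, not from the thesis):

```java
import java.util.Properties;

public class ConfigLoader {

    // (1) Programming context (context code): the statements surrounding the failure point.
    static int loadTimeout(Properties config) {
        String input = config.getProperty("timeout");   // null when the key is missing
        return Integer.parseInt(input);                  // <-- exception triggering point
    }

    public static void main(String[] args) {
        loadTimeout(new Properties());   // no "timeout" key, so parseInt(null) fails
    }

    // (2) Error message produced at runtime (exact wording varies by JDK version):
    //     java.lang.NumberFormatException: null
    //
    // (3) Stack trace locating the failure (line numbers are illustrative):
    //     at java.lang.Integer.parseInt(Integer.java)
    //     at ConfigLoader.loadTimeout(ConfigLoader.java:8)
    //     at ConfigLoader.main(ConfigLoader.java:12)
}
```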


SURFCLIPSE: IDE-BASED CONTEXT-AWARE META SEARCH ENGINE


MOTIVATIONAL EXPERIMENT

75 programming exceptions (details later)
Each individual engine can provide solutions for at most 58 exceptions, and each has some unique results.
The combination of content & context is always better than content only.

Search Query        | Common for All | Google Unique | Yahoo Unique | Bing Unique
Content Only        | 32             | 09            | 16           | 18
Content and Context | 47             | 09            | 11           | 10


THE KEY IDEA!! META SEARCH ENGINE

Fig: Meta search engine


PROPOSED IDE-BASED META SEARCH MODEL

Fig: Workflow of the proposed IDE-based meta search model


PROPOSED IDE-BASED META SEARCH MODEL

Distinguishing Features (5)
IDE-Based solution
  Web search, search results and web browsing, all from the IDE
  No context-switching needed
Meta search engine
  Captures data from multiple search engines
  Also applies custom ranking techniques
Context-Aware search
  Uses stack trace information
  Uses context code (surroundings of exception locations)
Software as a Service (SaaS)
  Search is provided as a web service, and can be leveraged by an IDE
  http://srlabg53-2.usask.ca/wssurfclipse/


PROPOSED IDE-BASED META SEARCH MODEL

Two Working Modes
Proactive Mode
  Auto-detects the occurrence of an exception
  Initiates the search for the exception by the client itself
  Aligned with Cordeiro et al. (RSSE 2012) & Ponzanelli et al. (ICSE 2013)
Interactive Mode
  Developer starts the search using the context menu
  Also facilitates keyword-based search
  Aligned with traditional web search within the IDE


PROPOSED METRICS & SCORES

Content Matching Score (Scms)
  Cosine similarity based measurement (see the sketch below)
Stack trace Matching Score (Sstm)
  Structural and lexical similarity measurement of stack traces
Code context Matching Score (Sccx)
  Code snippet similarity (code clones)
StackOverflow Vote Score (Sso)
  Total votes for all posts in the SO result link
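As a rough illustration of the content matching idea (my own sketch, not the thesis implementation), the exception details and a candidate page can both be treated as term-frequency vectors and compared with cosine similarity:

```java
import java.util.HashMap;
import java.util.Map;

public class ContentMatch {

    // Build a simple term-frequency vector from free text.
    static Map<String, Integer> termFrequencies(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                tf.merge(token, 1, Integer::sum);
            }
        }
        return tf;
    }

    // Cosine similarity between two term-frequency vectors.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) {
            normB += v * v;
        }
        return (normA == 0 || normB == 0) ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double scms = cosine(
            termFrequencies("java.lang.NullPointerException at MyParser.parse"),
            termFrequencies("How to fix NullPointerException thrown in a parser class"));
        System.out.println("Content Matching Score (Scms) = " + scms);
    }
}
```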


PROPOSED METRICS & SCORES

Site Traffic Rank Score (Sstr)
  Alexa and Compete rank of each link
Search Engine Weight (Ssew)
  Relative reliability or importance of each search engine
  Experiments with 75 programming queries against the search engines
Heuristic weights of the metrics are determined through controlled experiments (see the sketch below).
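A minimal sketch of how the individual scores could be combined into one ranking score; the weights below are placeholders, since the actual heuristic weights come from the controlled experiments mentioned above:

```java
public class ResultRanker {

    // Placeholder weights; the thesis derives the heuristic weights experimentally.
    static final double W_CMS = 0.30;  // content matching
    static final double W_STM = 0.25;  // stack trace matching
    static final double W_CCX = 0.20;  // context code matching
    static final double W_SO  = 0.10;  // StackOverflow votes
    static final double W_STR = 0.10;  // site traffic rank
    static final double W_SEW = 0.05;  // search engine weight

    // Combine normalized metric scores (each assumed to lie in [0, 1]) into one ranking score.
    static double finalScore(double scms, double sstm, double sccx,
                             double sso, double sstr, double ssew) {
        return W_CMS * scms + W_STM * sstm + W_CCX * sccx
             + W_SO * sso + W_STR * sstr + W_SEW * ssew;
    }
}
```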


EXPERIMENT OVERVIEW

75 exceptions

Eclipse plug-in & Java development

Gold set solutions
Peers


RESULTS ON DIFFERENT RANKING ASPECTS

Score Components                            | Metrics | Proactive Mode (Top 30) | Interactive Mode (Top 30)
Content                                     | MP      | 0.0371                  | 0.0481
                                            | TEF     | 56 (75)                 | 65 (75)
                                            | R       | 74.66%                  | 86.66%
Content + Context                           | MP      | 0.0376                  | 0.0514
                                            | TEF     | 55 (75)                 | 66 (75)
                                            | R       | 73.33%                  | 88.00%
Content + Context + Popularity              | MP      | 0.0381                  | 0.0519
                                            | TEF     | 56 (75)                 | 66 (75)
                                            | R       | 74.66%                  | 88.00%
Content + Context + Popularity + Confidence | MP      | 0.0380                  | 0.0538
                                            | TEF     | 56 (75)                 | 68 (75)
                                            | R       | 74.66%                  | 90.66%

[MP = Mean Precision, R = Recall, TEF = Total Exceptions Fixed]


COMPARISON WITH EXISTING APPROACHES

Recommender                           | Metrics | Top 10  | Top 20  | Top 30
Cordeiro et al. (only stack traces)   | MP      | 0.0202  | 0.0128  | 0.0085
                                      | TEF     | 15 (75) | 18 (75) | 18 (75)
                                      | R       | 20.00%  | 24.00%  | 24.00%
Proposed Method (Proactive Mode)      | MP      | 0.0886  | 0.0529  | 0.0380
                                      | TEF     | 51 (75) | 55 (75) | 56 (75)
                                      | R       | 68.00%  | 73.33%  | 74.66%
Ponzanelli et al. (only context-code) | MP      | 0.0243  | 0.0135  | 0.0099
                                      | TEF     | 7 (37)  | 7 (37)  | 7 (37)
                                      | R       | 18.92%  | 18.92%  | 18.92%
Proposed Method (Proactive Mode)      | MP      | 0.1000  | 0.0621  | 0.0450
                                      | TEF     | 30 (37) | 32 (37) | 32 (37)
                                      | R       | 81.08%  | 86.48%  | 86.48%

[MP = Mean Precision, R = Recall, TEF = Total Exceptions Fixed]


COMPARISON WITH TRADITIONAL SEARCH ENGINES

Search Engine                      | Metrics | Top 10  | Top 20  | Top 30
Google                             | MP      | 0.1571  | 0.0864  | 0.0580
                                   | TEF     | 57 (75) | 57 (75) | 57 (75)
                                   | R       | 76.00%  | 76.00%  | 76.00%
Bing                               | MP      | 0.1013  | 0.0533  | 0.0364
                                   | TEF     | 55 (75) | 58 (75) | 58 (75)
                                   | R       | 73.33%  | 77.33%  | 77.33%
Yahoo!                             | MP      | 0.0986  | 0.0539  | 0.0369
                                   | TEF     | 54 (75) | 57 (75) | 57 (75)
                                   | R       | 72.00%  | 76.00%  | 76.00%
StackOverflow Search               | MP      | 0.0226  | 0.0140  | 0.0097
                                   | TEF     | 14 (75) | 17 (75) | 17 (75)
                                   | R       | 18.66%  | 22.66%  | 22.66%
Proposed Method (Interactive mode) | MP      | 0.1229  | 0.0736  | 0.0538
                                   | TEF     | 59 (75) | 64 (75) | 68 (75)
                                   | R       | 78.66%  | 85.33%  | 90.66%


CONTRIBUTIONS OF THIS THESIS

(1) SurfClipse (WCRE 2013, CSMR/WCRE 2014): Web search
(2) QueryClipse (ICSME 2014): Query formulation
(3) ContentSuggest: Post-search analysis
(4) SurfExample (SCAM 2014): Exception handling
(5) ExcClipse: User study


QUERYCLIPSE: CONTEXT-AWARE SEARCH QUERY RECOMMENDER


MOTIVATING EXAMPLE

Fig: Stack trace and context code are fed into QueryClipse, which recommends search queries


PROPOSED CONTEXT-AWARE QUERY RECOMMENDATION APPROACH

Distinguishing Features (4)
Context-Aware Query
  Exploits both stack trace and context code
  Extracts search keywords carefully and systematically (see the sketch below)
Ranked List of Queries
  Ranked list based on keyword importance in queries
  Automatic suggestion through auto-completion
Custom Search Query
  Stack trace graph based on implied relationships
  Keyword importance based on network connectivity
Query Length Customization
  Number of query keywords is customizable
  Search-friendly & easily applicable to any search engine
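As a rough illustration of systematic keyword extraction from a stack trace (my own sketch; the class and method names are hypothetical, and the actual QueryClipse pipeline is richer):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueryCandidates {

    // Pull the exception type plus class/method names from "at pkg.Class.method(...)" frames.
    static Set<String> keywordsFrom(String stackTrace) {
        Set<String> keywords = new LinkedHashSet<>();
        Matcher exception = Pattern.compile("([\\w.]+Exception|[\\w.]+Error)").matcher(stackTrace);
        if (exception.find()) {
            keywords.add(exception.group(1));
        }
        Matcher frame = Pattern.compile("at\\s+[\\w.]*\\.(\\w+)\\.(\\w+)\\(").matcher(stackTrace);
        while (frame.find() && keywords.size() < 6) {   // keep the query short
            keywords.add(frame.group(1));               // class name
            keywords.add(frame.group(2));               // method name
        }
        return keywords;
    }

    public static void main(String[] args) {
        String trace = "java.lang.NullPointerException\n"
                     + "    at com.example.MyParser.parseLine(MyParser.java:42)\n"
                     + "    at com.example.Main.main(Main.java:10)";
        System.out.println(String.join(" ", keywordsFrom(trace)));
        // e.g. java.lang.NullPointerException MyParser parseLine Main main
    }
}
```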


CONTENTSUGGEST: CONTEXT-AWARE PAGE CONTENT RECOMMENDER


MOTIVATING EXAMPLE


Only the most relevant page section is displayed
Less information overhead, less effort required
No need to browse the page for relevance checking


PROPOSED CONTEXT-AWARE PAGE CONTENT SUGGESTION APPROACH

Distinguishing Features (3)
Relevant section(s) suggestion
  Analyzes both quality and relevance of the content
  Exploits stack trace & context code for relevance checking
  Partial automation in post-search analysis
Less content, less overhead
  Less content to analyze for page relevance checking
  Displayed content is more useful than the meta description
Noise-free version of the page
  Removes advertisements, irrelevant widgets and so on
  Applies link-based heuristics (see the sketch below)
  Returns a noise-free version of the web page
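A minimal sketch of such link-based noise filtering, assuming the jsoup HTML parser (the library choice, tag list, and threshold are my own illustration, not details from the thesis):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class NoiseFilter {

    // Strip script/style and drop blocks dominated by links (navigation, ads, widgets).
    static Document removeNoise(String html) {
        Document doc = Jsoup.parse(html);
        doc.select("script, style, iframe, nav, footer").remove();
        for (Element div : doc.select("div")) {
            int textLength = div.text().length();
            int linkTextLength = 0;
            for (Element a : div.select("a")) {
                linkTextLength += a.text().length();
            }
            // Link-based heuristic: mostly-link blocks are treated as boilerplate.
            if (textLength > 0 && (double) linkTextLength / textLength > 0.7) {
                div.remove();
            }
        }
        return doc;
    }
}
```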


CONTRIBUTIONS OF THIS THESIS

(1) SurfClipse (WCRE 2013, CSMR/WCRE 2014): Web search
(2) QueryClipse (ICSME 2014): Query formulation
(3) ContentSuggest: Post-search analysis
(4) SurfExample (SCAM 2014): Exception handling
(5) ExcClipse: User study


SURFEXAMPLE: CONTEXT-AWARE CODE EXAMPLE RECOMMENDER FOR EXCEPTION HANDLING


MOTIVATING EXAMPLE

Fig: Context code is fed into SurfExample, which recommends a code example for exception handling


PROPOSED CONTEXT-AWARE CODE EXAMPLE RECOMMENDER FOR EXCEPTION HANDLING

Distinguishing Features (3)
Graph-based structural relevance
  Static relationship and data dependency graph
  Graph structure matching
Handler Quality Paradigm
  Novel idea to ensure the quality of exception handlers
  Based on readability, amount & quality of the handler actions
Seamless integration of dataset
  Exploits the GitHub API for data collection
  Hundreds of popular and mature open source projects from Eclipse, Apache and others


CONTRIBUTIONS OF THIS THESIS

(1) SurfClipse (WCRE 2013, CSMR/WCRE 2014): Web search
(2) QueryClipse (ICSME 2014): Query formulation
(3) ContentSuggest: Post-search analysis
(4) SurfExample (SCAM 2014): Exception handling
(5) ExcClipse: User study


EXCCLIPSE: COMPARATIVE ANALYSIS BETWEEN PROPOSED APPROACHES AND TRADITIONAL ONES


USER STUDY OVERVIEW

Six participants
Four exceptions
Four tasks


USER STUDY OVERVIEW

Questionnaire
Observation checklist
Training
Execution
Evaluation


EVALUATION FEATURES

Tool Feature                        | Functionality | Notation
Support for query formulation       | Web search    | F1
Accuracy & effectiveness of results | Web search    | F2
Post-search content analysis        | Web search    | F3
Support for query formulation       | Code search   | F4
Relevance & accuracy of results     | Code search   | F5
Usability                           | Overall       | F6
Efficiency                          | Overall       | F7
Visualization support               | Web search    | F8
Visualization support               | Code search   | F9


FEATURE-WISE RATINGS (WEB SEARCH)


FEATURE-WISE RATINGS (CODE SEARCH)


FEATURE-WISE RATINGS (NON-FUNCTIONAL)


OVERALL RATINGS


THREATS TO VALIDITY

SurfClipse: search engines are constantly evolving, so the same results may not be reproduced at a later time.
QueryClipse: long queries may be generated due to lengthy error messages in the stack trace.
ContentSuggest: changes in the look and feel of the page due to removal of <style> and <script> tags.
SurfExample: subjective bias in gold set development.
ExcClipse: limited number of participants.


CONCLUSION


CONCLUDING REMARKS

SurfClipse: context-aware meta search engine
  IDE-based complete web search solution
  Outperforms two relevant existing approaches
  Higher recall than three search engines, with precision comparable to Google, the best performing engine
QueryClipse: context-aware query recommender
  More effective than traditional queries and queries by existing approaches
  Highly applicable in terms of pyramid score


CONCLUDING REMARKS

ContentSuggest: context-aware page content recommender
  Exploits exception details and recommends relevant section(s) from the page
  Less information, less overhead for relevance checking
  Great potential for problem solving
SurfExample: code example recommender for exception handling
  Graph-based structural relevance matching
  Handler Quality paradigm
  Outperforms four existing approaches on all metrics


FUTURE WORK

SurfClipse: DOM-based element extraction & topic modeling
QueryClipse: semantic and customized query recommendation
ContentSuggest: more focused content recommendation (e.g., paragraph of interest) for problem solving
SurfExample: more directed support (e.g., applicability of an example) for exception handling


THANK YOU!!


REFERENCES

[1] J. Cordeiro, B. Antunes, and P. Gomes. Context-based Recommendation to Support Problem Solving in Software Development. In Proc. RSSE, pages 85–89, June 2012.
[2] L. Ponzanelli, A. Bacchelli, and M. Lanza. Seahawk: Stack Overflow in the IDE. In Proc. ICSE, pages 1295–1298, 2013.
[3] J. Brandt, P. J. Guo, J. Lewenstein, M. Dontcheva, and S. R. Klemmer. Two Studies of Opportunistic Programming: Interleaving Web Foraging, Learning, and Writing Code. In Proc. SIGCHI, pages 1589–1598, 2009.
[4] F. Sun, D. Song, and L. Liao. DOM Based Content Extraction via Text Density. In Proc. SIGIR, pages 245–254, 2011.
[5] T. Gottron. Content Code Blurring: A New Approach to Content Extraction. In Proc. DEXA, pages 29–33, 2008.
[6] S. Bajracharya, J. Ossher, and C. Lopes. Sourcerer: An Internet-Scale Software Repository. In Proc. SUITE, pages 1–4, 2009.
[7] E. A. Barbosa, A. Garcia, and M. Mezini. Heuristic Strategies for Recommendation of Exception Handling Code. In Proc. SBES, pages 171–180, 2012.
[8] R. Holmes and G. C. Murphy. Using Structural Context to Recommend Source Code Examples. In Proc. ICSE, pages 117–125, 2005.
[9] W. Takuya and H. Masuhara. A Spontaneous Code Recommendation Tool Based on Associative Search. In Proc. SUITE, pages 17–20, 2011.
[10] M. M. Rahman, S. Yeasmin, and C. K. Roy. Towards a Context-Aware IDE-Based Meta Search Engine for Recommendation about Programming Errors and Exceptions. In Proc. CSMR-WCRE, pages 194–203, 2014.
[11] M. M. Rahman and C. K. Roy. On the Use of Context in Recommending Exception Handling Code Examples. In Proc. SCAM, 10 pp., 2014 (to appear).
[12] M. M. Rahman and C. K. Roy. SurfClipse: Context-Aware Meta Search in the IDE. In Proc. ICSME, 4 pp., 2014 (to appear).
[13] Slide 13, Meta Search Engine, http://en.wikipedia.org/wiki/Metasearch_engine


APPENDICES


MAPPING BETWEEN PROBLEM SOLVING STEPS & PROPOSED APPROACHES

Step I: Associated with SurfClipse
Step II: Associated with QueryClipse
Step III: Associated with ContentSuggest
Step IV: Associated with SurfExample


THREATS TO VALIDITY (SURFCLIPSE)

Search is not real-time yet; it generally takes about 20-25 seconds per search. Multithreading is used, but extensive parallel processing is needed.
Search engines are constantly evolving, so the same results may not be reproduced at a later time.
Experimented with common exceptions, which are widely discussed and available on the web.


TRADITIONAL SEARCH QUERIES

Popular queries suggested by search engines may not be relevant all the time
Preparing a suitable query is non-trivial
Trial-and-error approach in query formulation
Ad-hoc queries (e.g., error message) may not reflect the context of the exception


PROPOSED CONTEXT-AWARE QUERY RECOMMENDATION APPROACH


PROPOSED METRICS (3)

Trace Token Rank (TTR)
  Trace graph developed based on implied relationships
  Calculated using graph-based term weighting, an adaptation of Google's PageRank algorithm (see the sketch below)
Degree of Interest (DOI)
  Heuristic proximity of a token to the exception location
  Associated with call references in the stack trace
Trace Token Frequency (TTF)
  Frequency of a trace token in the context code
  Associated with method calls and object instantiation
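A simplified sketch of graph-based term weighting in the PageRank spirit (my own illustration; the actual trace graph construction and scoring in the thesis may differ): tokens are nodes, edges connect related tokens, and importance flows along the edges over a few iterations.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class TraceTokenRank {

    // Iteratively score tokens on a token co-occurrence graph (PageRank-style).
    static Map<String, Double> rank(Map<String, Set<String>> graph, int iterations) {
        double damping = 0.85;
        Map<String, Double> score = new HashMap<>();
        for (String token : graph.keySet()) {
            score.put(token, 1.0);   // uniform initial weight
        }
        for (int i = 0; i < iterations; i++) {
            Map<String, Double> next = new HashMap<>();
            for (String token : graph.keySet()) {
                double incoming = 0;
                for (String neighbour : graph.get(token)) {
                    incoming += score.get(neighbour) / graph.get(neighbour).size();
                }
                next.put(token, (1 - damping) + damping * incoming);
            }
            score = next;
        }
        return score;
    }

    public static void main(String[] args) {
        // Hypothetical trace tokens and their implied relationships.
        Map<String, Set<String>> graph = new HashMap<>();
        graph.put("NullPointerException", Set.of("MyParser", "parseLine"));
        graph.put("MyParser", Set.of("NullPointerException", "parseLine", "main"));
        graph.put("parseLine", Set.of("NullPointerException", "MyParser"));
        graph.put("main", Set.of("MyParser"));
        rank(graph, 20).forEach((token, weight) -> System.out.println(token + " -> " + weight));
    }
}
```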


EXPERIMENT OVERVIEW

50 exceptions, their technical details and context code segments collected from our first study
Recommended queries evaluated by searching with Google, Bing and Yahoo!
Queries compared with existing approaches
Query ranks validated with experiments
Applicability of queries validated using a user study
Performance metrics: precision, recall, % of exceptions solved


RESULTS ON DIFFERENT RANKING ASPECTS

Rank Aspects    | Metrics | Google Top 10 | Google Top 20 | Bing Top 10 | Bing Top 20 | Yahoo! Top 10 | Yahoo! Top 20
{DOI, TTR}      | MAPK    | 36.36%        | 36.36%        | 49.24%      | 49.24%      | 51.70%        | 51.61%
                | R       | 15.19%        | 15.19%        | 27.84%      | 30.68%      | 34.09%        | 35.23%
                | PTCS    | 48.00%        | 48.00%        | 70.00%      | 76.00%      | 76.00%        | 78.00%
{TTR, TTF}      | MAPK    | 38.23%        | 38.23%        | 50.18%      | 50.09%      | 45.46%        | 44.60%
                | R       | 15.34%        | 15.34%        | 29.55%      | 31.25%      | 30.68%        | 32.39%
                | PTCS    | 46.00%        | 46.00%        | 70.00%      | 74.00%      | 68.00%        | 70.00%
{DOI, TTF}      | MAPK    | 37.26%        | 37.26%        | 49.53%      | 48.23%      | 53.49%        | 51.35%
                | R       | 17.61%        | 17.61%        | 27.84%      | 30.11%      | 30.68%        | 32.95%
                | PTCS    | 50.00%        | 50.00%        | 72.00%      | 74.00%      | 78.00%        | 78.00%
{DOI, TTR, TTF} | MAPK    | 34.06%        | 34.06%        | 51.85%      | 50.44%      | 55.31%        | 53.40%
                | R       | 13.64%        | 13.64%        | 27.84%      | 31.25%      | 31.82%        | 35.23%
                | PTCS    | 42.00%        | 42.00%        | 72.00%      | 76.00%      | 76.00%        | 80.00%

[MAPK = Mean Average Precision at K, R = Recall, PTCS = % of Exceptions Solved]


COMPARISON WITH EXISTING APPROACHES

Approach                         | Metrics | Google Top 10 | Google Top 20 | Bing Top 10 | Bing Top 20 | Yahoo! Top 10 | Yahoo! Top 20
Traditional (only error message) | MAPK    | 38.97%        | 38.97%        | 44.11%      | 43.82%      | 43.18%        | 43.18%
                                 | R       | 19.88%        | 19.88%        | 24.43%      | 26.14%      | 25.00%        | 25.00%
                                 | PTCS    | 52.00%        | 52.00%        | 58.00%      | 60.00%      | 56.00%        | 56.00%
Cordeiro et al.                  | MAPK    | 21.33%        | 21.17%        | 19.22%      | 19.22%      | 15.94%        | 16.60%
                                 | R       | 10.80%        | 11.93%        | 11.93%      | 13.07%      | 10.80%        | 13.06%
                                 | PTCS    | 36.00%        | 38.00%        | 34.00%      | 36.00%      | 32.00%        | 40.00%
Ponzanelli et al.                | MAPK    | 14.36%        | 14.36%        | 30.27%      | 29.98%      | 28.12%        | 28.12%
                                 | R       | 9.09%         | 9.09%         | 12.50%      | 13.07%      | 12.50%        | 12.50%
                                 | PTCS    | 24.00%        | 24.00%        | 38.00%      | 38.00%      | 38.00%        | 38.00%
Proposed approach                | MAPK    | 34.06%        | 34.06%        | 51.85%      | 50.44%      | 55.31%        | 53.40%
                                 | R       | 13.64%        | 13.64%        | 27.84%      | 31.25%      | 31.82%        | 35.23%
                                 | PTCS    | 42.00%        | 42.00%        | 72.00%      | 76.00%      | 76.00%        | 80.00%

[MAPK = Mean Average Precision at K, R = Recall, PTCS = % of Exceptions Solved]


FINDINGS FROM USER STUDY

Query No.     | 1    | 2    | 3    | 4    | 5    | APS  | MAPS
PS (Rank I)   | 0.75 | 0.89 | 1.00 | 1.00 | 1.00 | 0.93 |
PS (Rank II)  | 0.67 | 0.72 | 0.93 | 1.00 | 0.63 | 0.79 | 0.84
PS (Rank III) | 0.67 | 0.72 | 1.00 | 0.93 | 0.63 | 0.79 |

[PS = Pyramid Score, APS = Average Pyramid Score, MAPS = Mean Average Pyramid Score]


THREATS TO VALIDITY

Long queries may be generated due to lengthy error messages in the stack trace.
Queries may be less user-friendly due to complex program tokens (class names, method names) in the trace information.


PROPOSED CONTEXT-AWARE PAGE CONTENT SUGGESTION APPROACH


PROPOSED METRICS & SCORES (3)

Content Density (CTD)
  Text Density: density of any textual content within a tag
  Link Density: density of link-based content (i.e., <a>, <input>)
  Code Density: density of code-related content (i.e., <code>, <pre>)
Content Relevance (CTR)
  Text Relevance: relevance of any textual content within a tag
  Code Relevance: relevance of code-related content within a tag
Content Score (CTS)
  Combines both Content Density and Content Relevance (see the sketch below)
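A rough sketch of the density side of these metrics, again assuming jsoup (the formulas and weights are illustrative, not the exact definitions from the thesis):

```java
import org.jsoup.nodes.Element;

public class ContentDensity {

    // Share of a section's text that sits inside link elements (<a>, <input>).
    static double linkDensity(Element section) {
        int total = section.text().length();
        int linked = section.select("a, input").text().length();
        return total == 0 ? 0 : (double) linked / total;
    }

    // Share of a section's text that sits inside code elements (<code>, <pre>).
    static double codeDensity(Element section) {
        int total = section.text().length();
        int code = section.select("code, pre").text().length();
        return total == 0 ? 0 : (double) code / total;
    }

    // Content score: combine density with a relevance score computed elsewhere
    // (e.g., cosine similarity between the section text and the exception details).
    static double contentScore(Element section, double relevance) {
        double density = (1 - linkDensity(section)) + codeDensity(section);
        return 0.5 * density + 0.5 * relevance;   // illustrative weights only
    }
}
```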


EXPERIMENT OVERVIEW

500 web pages, 150 exceptions and their details (i.e., stack trace, context code) as the dataset
40% of the pages from the StackOverflow Q & A site
Evaluated against manually prepared gold sets
Evaluated for both relevant and noise-free content recommendation
Compared with four existing approaches
Performance metrics: precision, recall, F1-measure


RESULT ON DIFFERENT ASPECTS OF PAGE CONTENT (RELEVANT CONTENT SUGGESTION)

Content Aspect          | Metrics | SO Pages | Non-SO Pages | All Pages
Content Density (CTD)   | MP      | 50.91%   | 49.50%       | 50.07%
                        | MR      | 91.74%   | 75.71%       | 82.18%
                        | MF      | 62.32%   | 53.76%       | 57.22%
Content Relevance (CTR) | MP      | 86.63%   | 69.17%       | 76.23%
                        | MR      | 52.17%   | 57.66%       | 55.44%
                        | MF      | 61.07%   | 55.88%       | 57.98%
{CTD, CTR}              | MP      | 89.91%   | 74.12%       | 80.50%
                        | MR      | 74.90%   | 80.76%       | 78.39%
                        | MF      | 80.07%   | 73.91%       | 76.40%

[MP = Mean Precision, MR = Mean Recall, MF = Mean F1-measure]


COMPARISON WITH EXISTING APPROACHES

Approach                                 | Metrics | SO Pages | Non-SO Pages | All Pages
Sun et al.                               | MP      | 80.61%   | 78.70%       | 79.49%
                                         | MR      | 86.41%   | 75.67%       | 80.14%
                                         | MF      | 83.14%   | 75.48%       | 78.67%
ACCB [50]                                | MP      | 90.65%   | 93.07%       | 92.06%
                                         | MR      | 77.32%   | 79.98%       | 78.87%
                                         | MF      | 83.07%   | 84.64%       | 83.99%
Proposed (noise-free content extraction) | MP      | 91.27%   | 88.90%       | 89.88%
                                         | MR      | 89.27%   | 86.20%       | 87.48%
                                         | MF      | 90.00%   | 85.76%       | 87.53%
Sun et al. (adapted)                     | MP      | 52.63%   | 38.89%       | 44.44%
                                         | MR      | 86.49%   | 41.84%       | 59.88%
                                         | MF      | 62.57%   | 34.49%       | 45.84%
Proposed (relevant section suggestion)   | MP      | 89.91%   | 74.12%       | 80.50%
                                         | MR      | 74.90%   | 80.76%       | 78.39%
                                         | MF      | 80.07%   | 73.91%       | 76.40%

[MP = Mean Precision, MR = Mean Recall, MF = Mean F1-measure]


THREATS TO VALIDITY

Changes in the look and feel of the page due to removal of <style> and <script> tags

Lack of a fully-fledged user study


PROPOSED METRICS (3)

Structural Relevance (Rstr)
  API Object Match (AOM)
  Field Access Match (FAM)
  Method Invocation Match (MIM)
  Data Dependency Match (DDM)
Lexical Relevance (Rlex)
  Cosine Similarity
  Code Clone Measure
Quality of Exception Handler (Qehc)
  Readability (RA)
  Average Handler Actions (AHA)
  Handler to Code Ratio (HCR) (see the sketch below)
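A minimal sketch of the handler quality side (my own approximation of AHA and HCR; the thesis definitions may differ): count the actions inside catch blocks and relate handler size to the size of the guarded code.

```java
public class HandlerQuality {

    // Average Handler Actions (AHA): as a crude proxy, count semicolon-terminated
    // statements inside the catch block bodies and average over the handlers.
    static double averageHandlerActions(String[] catchBlocks) {
        if (catchBlocks.length == 0) return 0;
        int actions = 0;
        for (String body : catchBlocks) {
            actions += body.length() - body.replace(";", "").length();   // number of ';'
        }
        return (double) actions / catchBlocks.length;
    }

    // Handler to Code Ratio (HCR): handler size relative to the guarded try block.
    static double handlerToCodeRatio(String tryBody, String[] catchBlocks) {
        int handlerLines = 0;
        for (String body : catchBlocks) {
            handlerLines += body.split("\n").length;
        }
        int tryLines = tryBody.split("\n").length;
        return tryLines == 0 ? 0 : (double) handlerLines / tryLines;
    }
}
```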


EXPERIMENT OVERVIEW

65 exceptions and context code segments
4,400 code examples from 700+ repositories of Eclipse, Apache, Facebook and Twitter
Evaluated against a manually prepared gold set
Compared with four existing approaches
Performance metrics: precision, recall, # and % of exceptions handled


RESULT ON DIFFERENT RANKING ASPECTS

Ranking Aspects                                    | Metrics | Top 5  | Top 10 | Top 15
Structure (Rstr)                                   | MAPK    | 38.07% | 33.84% | 32.64%
                                                   | R       | 50.00% | 61.93% | 69.32%
                                                   | PEH     | 69.23% | 75.38% | 81.54%
Content (Rlex)                                     | MAPK    | 35.00% | 33.85% | 33.08%
                                                   | R       | 45.45% | 63.63% | 70.45%
                                                   | PEH     | 66.15% | 75.38% | 81.54%
{Structure (Rstr), Content (Rlex)}                 | MAPK    | 43.08% | 38.69% | 37.33%
                                                   | R       | 51.70% | 66.48% | 74.43%
                                                   | PEH     | 69.23% | 75.38% | 81.54%
{Structure (Rstr), Content (Rlex), Quality (Qehc)} | MAPK    | 41.92% | 39.92% | 38.64%
                                                   | R       | 57.39% | 68.75% | 76.70%
                                                   | PEH     | 73.85% | 81.54% | 86.15%

[MAPK = Mean Average Precision at K, R = Recall, PEH = % of exceptions handled]


COMPARISON WITH EXISTING APPROACHES

Recommender        | Metrics | Top 5  | Top 10 | Top 15
Barbosa et al.     | MAPK    | 16.15% | 14.69% | 13.72%
                   | R       | 16.47% | 25.57% | 31.25%
                   | PEH     | 27.69% | 38.46% | 44.62%
Holmes & Murphy    | MAPK    | 4.62%  | 2.31%  | 2.31%
                   | R       | 11.36% | 21.59% | 27.84%
                   | PEH     | 24.62% | 38.46% | 47.69%
Takuya & Masuhara  | MAPK    | 21.54% | 20.51% | 19.74%
                   | R       | 15.34% | 27.27% | 30.68%
                   | PEH     | 33.85% | 47.69% | 47.69%
Bajracharya et al. | MAPK    | 8.46%  | 7.95%  | 6.41%
                   | R       | 10.80% | 15.91% | 19.32%
                   | PEH     | 18.46% | 27.69% | 30.77%
Proposed approach  | MAPK    | 41.92% | 39.92% | 38.64%
                   | R       | 57.39% | 68.75% | 76.70%
                   | PEH     | 73.85% | 81.54% | 86.15%

[MAPK = Mean Average Precision at K, R = Recall, PEH = % of exceptions handled]


THREATS TO VALIDITY

Subjective bias in gold set development
Limited size of the dynamic corpus for recommendation
Limited number of exceptions for experiments


TRADITIONAL WEB SEARCH

No ties between IDE and web browsers

Does not consider problem-context

Environment-switching is distracting & time-consuming

Often not very productive (trial & error approach)


TRADITIONAL SUPPORT FOR POST-SEARCH CONTENT ANALYSIS

Keyword highlighting in title based on search query

Very limited meta description using keywords or phrases.

Page URL, page visit statistics
Little clue about the actual content of the web page
Forces one to browse the page


TRADITIONAL CODE SEARCH ENGINES

No ties with the IDE
Return hundreds of result pages
Keyword matching search
No support for search query formulation


PROPOSED CONTEXT-AWARE CODE EXAMPLE RECOMMENDER FOR EXCEPTION HANDLING


USER STUDY GROUPING

Participants: P1, P2, P3, P4, P5, P6
Groups: A (P1, P4, P5), B (P2, P3, P6)
Exceptions: EC1, EC2, EC3, EC4
Exception sets: S1 (EC1, EC2), S2 (EC3, EC4)
Task assignments: T/S1 + E/S2, T/S2 + E/S1
Sub-groups: I (P1, P5), II (P4); I (P2), II (P3, P6)
Orderings: TE, ET, TE, ET