
Page 1

What do we know about the success of MDD?

An update from research

Tijs van der Storm

[email protected] / @tvdstorm

Page 2

What I am working on:

http://www.rascal-mpl.org http://www.enso-lang.org

Page 3

MDD…

Page 4

MDD…

Page 5

What has research to say about the success of MDD?

Page 6

Overview of the studies discussed in this talk (the slide shows the first pages of these six papers; each appears in more detail on the slides that follow):

• Jon Whittle, John Hutchinson, and Mark Rouncefield, "The State of Practice in Model-Driven Engineering," IEEE Software, May/June 2014.

• Marian Petre, "UML in Practice," 35th International Conference on Software Engineering (ICSE'13).

• Sebastian Erdweg, Tijs van der Storm, Markus Völter, et al., "The State of the Art in Language Workbenches: Conclusions from the Language Workbench Challenge," Software Language Engineering, 2013.

• Jon Whittle, John Hutchinson, Mark Rouncefield, Håkan Burden, and Rogardt Heldal, "Industrial Adoption of Model-Driven Engineering: Are the Tools Really the Problem?"

• Daniel Moody and Jos van Hillegersberg, "Evaluating the Visual Syntax of UML: An Analysis of the Cognitive Effectiveness of the UML Family of Diagrams," Software Language Engineering, LNCS 5452, 2009.

• Marian Petre, "Why Looking Isn't Always Seeing: Readership Skills and Graphical Programming," Communications of the ACM, June 1995.

Page 7

The State of Practice in Model-Driven Engineering
Jon Whittle, John Hutchinson, and Mark Rouncefield, Lancaster University

// Despite lively debate over the past decade on the benefits and drawbacks of model-driven engineering (MDE), there have been few industry-wide studies of MDE in practice. A new study that surveyed 450 MDE practitioners and performed in-depth interviews with 22 more suggests that although MDE might be more widespread than commonly believed, developers rarely use it to generate whole systems. Rather, they apply MDE to develop key parts of a system. //

In 2001, the Object Management Group published the first version of its model-driven architecture (MDA) specification. MDA emphasized the role of models as primary artifacts in software development and, in particular, argued that models should be precise enough to support automated model transformations between life-cycle phases. This wasn't a new idea, of course, but it did lead to a resurgence of activity in the area as well as hotly contested debates between proponents and detractors of model-driven approaches.

Many years later, there remains a lack of clarity on whether model-driven engineering (MDE) is a good way to develop software (see the "What Is MDE, Anyway?" sidebar). Some companies have reported great success with it, whereas others have failed horribly. What's missing is an industry-wide, independent study of MDE in practice, highlighting the factors that lead to success or failure. Although there have been a few prior surveys of modeling in industry, they've focused on only one aspect of modeling, such as the use of UML or formal models.

In this article, we report on a new study of MDE practice that covers a broad range of experiences. In particular, we focus on identifying MDE's success and failure factors. We surveyed 450 MDE practitioners and interviewed 22 more from 17 different companies representing 9 different industrial sectors (see the "Methods" sidebar for more information on the particulars). The study reflects a wide range of maturity levels with MDE: questionnaire respondents were equally split among those in early exploration phases, those carrying out their first MDE project, and those with many years' experience with MDE. Interviewees were typically very experienced with MDE.

IEEE Software, May/June 2014

• 450 practitioners surveyed

• 22 additional interviews

• 17 companies

• 9 industrial sectors

Page 8

Small, domain-specific languages (DSLs)

vs

Page 9

Parts, not wholes

vs

Page 10

Formal documentation

Page 11

Notations

Page 12

Why Looking Isn't Always Seeing: Readership Skills and Graphical Programming

Marian Petre

"A picture is worth a thousand words"—isn't it? And hence graphical representation is by its nature universally superior to text—isn't it? Why then isn't the anecdote itself expressed graphically? Perhaps anecdotes don't lend themselves to purely graphical presentation. Perhaps this phrase is too simplistic to be appropriate in the context of graphical notations. Nevertheless, many writers on visual programming argue in just this way: graphical representations are better simply because they are graphical (e.g., [22]).

This article argues otherwise: that text and graphics are not necessarily an equivalent exchange, and that we still don't fully understand 'what's good about graphics'. This is not an argument for a 'textist opposition', but rather a call for balance and consideration. Both graphics and text have their uses—and their limitations. Pictorial and graphic media can carry considerable information in what may be a convenient and attractive form, but incorporating graphics into programming notations requires us to understand the precise contribution that graphical representations might make to the job at hand.

Sidebar: Many believe that visual programming techniques are quite close to developers. This article reports on some fascinating research focusing on understanding how textual and visual representations for software differ in effectiveness. Among other things, it is determined that the differences lie not so much in the textual-visual distinction as in the degree to which specific representations support the conventions experts expect.

In considering representations for programming, the concern is formalisms, not art—precision, not breadth of interpretation. The implicit model behind at least some of the claims that graphical representations are superior to textual ones is that the programmer takes in a program in the same way that a viewer takes in a painting: by standing in front of it and soaking it in, letting the eye wander from place to place, receiving a 'gestalt' impression of the whole. But one purpose of programs is to present information clearly and unambiguously. Effective use requires purposeful perusal, not the unfettered, wandering eye of the casual art viewer. The aim is not poetic interpretation, but reliable interpretation. The question is not 'Is a picture worth a thousand words?', but 'Does a given picture convey the same thousand words to all viewers?' (see Figure 1).

A programmer is more like the reader of a technical manual than the viewer of a painting: a deliberate reader, goal-directed and hypothesis-driven. Some studies of reading clearly show that accomplished readers, reading for comprehension, are deliberate readers, making great use of the typographic and semantic cues found in well-presented text (see [2]). To support them in this activity, typographers have evolved ways—graphical enhancements—to make required information quickly accessible (program comprehension is analyzed in this style in [16]).

The programmer uses a programming notation with specific tasks or goals in mind, tasks that may well be complex and heterogeneous. The success of a representation, graphical or textual, depends on whether it makes accessible the particular information the user needs—and on how well it copes with the different information requirements of the user's various tasks.

Graphical representations are more challenging than they appear at first. This article refers to research results to consider why the attractions of graphical representations are not matched by performance, putting forth the arguments:

• that much of what contributes to the comprehensibility of a graphical representation isn't part of the formal programming notation but a 'secondary notation' of layout, typographic cues, and graphical enhancements that is subject to individual skill;

• that graphical readership is an acquired skill: structure, relationships, and relevance aren't universally obvious;

• that experts 'see' differently and use different strategies from novice graphical programmers;

• that, although some of their touted qualities may be illusory, graphical representations are nevertheless persistently appealing and that this appeal may have its own value;

• that the role of graphics in notation must be addressed realistically, rather than simplistically.

This article discusses these observations about how programmers actually use different representations and challenges the naive assumption that graphical representations are unproblematically more 'transparent'—more accessible, comprehensible, and memorable—than textual ones. It suggests instead that no single representation is a panacea, but that we need to identify appropriate criteria for choosing representational 'horses' for cognitive 'courses'.

This argument focuses on the use of graphics in notation, specifically in graphical programming languages. The issues addressed generalize to other contexts, such as computer interfaces and environments, where information must be presented precisely.

Figure 1. In notation: does a given picture convey the same thousand words to all viewers?

In: Communications of the ACM, June 1995/Vol. 38, No. 6

• Controlled comprehension experiments

Page 13

Graphical vs textual

• Graphics slower than text in comprehension

• Graphic readership is an acquired skill

• Graphics depends on secondary notation (layout)

• Graphics is more free: easier to go wrong

Page 14

Evaluating the Visual Syntax of UML: An Analysis of the Cognitive Effectiveness of the UML Family of Diagrams

Daniel Moody and Jos van Hillegersberg
Department of Information Systems & Change Management, University of Twente, Enschede, Netherlands

Abstract. UML is a visual language. However surprisingly, there has been very little attention in either research or practice to the visual notations used in UML. Both academic analyses and official revisions to the standard have focused almost exclusively on semantic issues, with little debate about the visual syntax. We believe this is a major oversight and that as a result, UML's visual development is lagging behind its semantic development. The lack of attention to visual aspects is surprising given that the form of visual representations is known to have an equal if not greater effect on understanding and problem solving performance than their content. The UML visual notations were developed in a bottom-up manner, by reusing and synthesising existing notations, with choice of graphical conventions based on expert consensus. We argue that this is an inappropriate basis for making visual representation decisions and they should be based on theory and empirical evidence about cognitive effectiveness. This paper evaluates the visual syntax of UML using a set of evidence-based principles for designing cognitively effective visual notations. The analysis reveals some serious design flaws in the UML visual notations together with practical recommendations for fixing them.

Excerpts shown on the slide (from this paper, together with a page from IEEE Transactions on Software Engineering, vol. 35, no. 6, Nov./Dec. 2009):

Principle of Semiotic Clarity: there should be a 1:1 correspondence between semantic constructs and graphical symbols. When there is not a one-to-one correspondence between constructs and symbols, one or more of the following anomalies can occur:

• Symbol redundancy occurs when multiple graphical symbols can be used to represent the same semantic construct.

• Symbol overload occurs when two different constructs can be represented by the same graphical symbol.

• Symbol excess occurs when graphical symbols do not correspond to any semantic construct.

• Symbol deficit occurs when there are semantic constructs that are not represented by any graphical symbol.

Instances of symbol redundancy are called synographs (the equivalent of synonyms in textual languages). Symbol redundancy places a burden of choice on the notation user to decide which symbol to use, and on the reader to remember multiple representations of the same construct. There are many instances of symbol redundancy in UML: for example, alternative graphical symbols for interfaces on class diagrams and for package relationships on package diagrams.

Principle of Visual Expressiveness: visual expressiveness refers to the number of different visual variables used in a visual notation. There are 8 elementary visual variables which can be used to graphically encode information, categorised into planar variables (the two spatial dimensions) and retinal variables (features of the retinal image). Using a range of variables results in a perceptually enriched representation which uses multiple, parallel channels of communication. Different visual variables have properties which make them suitable for encoding some types of information but not others.

Principle of Graphic Parsimony: graphic complexity is defined as the number of distinct graphical conventions used in a notation: the size of its visual vocabulary. Empirical studies show that increasing graphic complexity significantly reduces understanding of software engineering diagrams by naïve users, and it is a major barrier to learning and use of a notation. The human ability to discriminate between perceptually distinct alternatives on a single perceptual dimension (span of absolute judgement) is around six categories; this defines a practical limit for graphic complexity.

Evaluation of UML: the UML visual vocabulary contains many violations of semiotic clarity; in particular, it has alarmingly high levels of symbol redundancy and symbol overload.

in: Software Language Engineering, LNCS 5452, pp. 16–34, 2009.
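To make the four anomalies concrete, here is a small illustrative check, written for this transcript rather than taken from the paper: given a mapping that says which constructs may be drawn with which symbols, it reports redundancy, overload, excess, and deficit. The toy notation data at the bottom is hypothetical, not actual UML metamodel data.

```python
# Sketch: checking a notation against the Principle of Semiotic Clarity by
# detecting the four anomalies described above. All data here is made up.
from collections import defaultdict

def semiotic_clarity_report(mapping, all_constructs, all_symbols):
    """mapping: set of (construct, symbol) pairs, 'construct can be drawn as symbol'."""
    symbols_of = defaultdict(set)     # construct -> symbols that can represent it
    constructs_of = defaultdict(set)  # symbol -> constructs it can represent
    for construct, symbol in mapping:
        symbols_of[construct].add(symbol)
        constructs_of[symbol].add(construct)

    return {
        # one construct, several symbols (synographs)
        "redundancy": {c: s for c, s in symbols_of.items() if len(s) > 1},
        # one symbol, several constructs
        "overload": {s: c for s, c in constructs_of.items() if len(c) > 1},
        # symbols that denote no construct
        "excess": set(all_symbols) - set(constructs_of),
        # constructs that have no symbol
        "deficit": set(all_constructs) - set(symbols_of),
    }

# Hypothetical toy notation.
mapping = {
    ("Interface", "circle"), ("Interface", "stereotyped class box"),  # redundancy
    ("Class", "box"), ("Component", "box"),                           # overload
}
print(semiotic_clarity_report(
    mapping,
    all_constructs={"Interface", "Class", "Component", "Dependency"},  # deficit: Dependency
    all_symbols={"circle", "stereotyped class box", "box", "cloud"},   # excess: cloud
))
```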

Page 15

UML as visual notation*

• Symbol redundancy (n symbols ~ 1 meaning)

• Symbol overload (1 symbol ~ n meanings)

• Symbol excess ("boxitis")

• Visual proximity (links only differ in shape and brightness)

* "rather not"

Page 16

UML in Practice

Marian Petre, Centre for Research in Computing, The Open University, Milton Keynes, UK ([email protected])

Abstract—UML has been described by some as "the lingua franca of software engineering". Evidence from industry does not necessarily support such endorsements. How exactly is UML being used in industry – if it is? This paper presents a corpus of interviews with 50 professional software engineers in 50 companies and identifies 5 patterns of UML use.

Index Terms—UML, software development, software design, notation, empirical studies.

In: 35th International Conference on Software Engineering (ICSE’13), pp. 722-731

• Interviews with:

• 50 developers

• 50 companies

Page 17

UML in practice

• No UML: 38

• Code generation: 3

• Selective: 11

• Whole-hearted: 0

Page 18

How effective is UML modeling? An empirical perspective on costs and benefits

Michel R. V. Chaudron, Werner Heijstek, Ariadi Nugroho

Abstract: Modeling has become a common practice in modern software engineering. Since the mid 1990s the Unified Modeling Language (UML) has become the de facto standard for modeling software systems. The UML is used in all phases of software development, ranging from the requirement phase to the maintenance phase. However, empirical evidence regarding the effectiveness of modeling in software development is few and far apart. This paper aims to synthesize empirical evidence regarding the effectiveness of modeling using UML in software development, with a special focus on the cost and benefits.

In: Software & Systems Modeling (2012) 11:571–580.

• UML is used successfully, but

• an important obstacle is inadequate tools

Page 19

Page 20

Industrial Adoption of Model-Driven Engineering: Are the Tools Really the Problem?

Jon Whittle, John Hutchinson, Mark Rouncefield (Lancaster University), Håkan Burden and Rogardt Heldal (Chalmers University of Technology and University of Gothenburg)

Abstract. An oft-cited reason for lack of adoption of model-driven engineering (MDE) is poor tool support. However, studies have shown that adoption problems are as much to do with social and organizational factors as with tooling issues. This paper discusses the impact of tools on MDE adoption and places tooling within a broader organizational context. The paper revisits previous data on MDE adoption (19 in-depth interviews with MDE practitioners) and re-analyzes the data through the specific lens of MDE tools. In addition, the paper presents new data (20 new interviews in two specific companies) and analyzes it through the same lens. The key contribution of the paper is a taxonomy of tool-related considerations, based on industry data, which can be used to reflect on the tooling landscape as well as inform future research on MDE tools.

Keywords: model-driven engineering, modeling tools, organizational change

• 20 companies

• 39 professionals

• 100s of data points

• 300,000 words of transcribed data

Page 21

Are tools really the problem?

• Successful MDD => heavily tailored tools

• (or even home grown)

• But: tools hard to integrate with process and culture

Page 22

The State of the Art in Language Workbenches: Conclusions from the Language Workbench Challenge

Sebastian Erdweg, Tijs van der Storm, Markus Völter, Meinte Boersma, Remi Bosman, William R. Cook, Albert Gerritsen, Angelo Hulshout, Steven Kelly, Alex Loh, Gabriël Konat, Pedro J. Molina, Martin Palatnik, Risto Pohjonen, Eugen Schindler, Klemens Schindler, Riccardo Solmi, Vlad Vergu, Eelco Visser, Kevin van der Vlist, Guido Wachsmuth, and Jimi van der Woning

Abstract. Language workbenches are tools that provide high-level mechanisms for the implementation of (domain-specific) languages. Language workbenches are an active area of research that also receives many contributions from industry. To compare and discuss existing language workbenches, the annual Language Workbench Challenge was launched in 2011. Each year, participants are challenged to realize a given domain-specific language with their workbenches as a basis for discussion and comparison. In this paper, we describe the state of the art of language workbenches as observed in the previous editions of the Language Workbench Challenge. In particular, we capture the design space of language workbenches in a feature model and show where in this design space the participants of the 2013 Language Workbench Challenge reside. We compare these workbenches based on a DSL for questionnaires that was realized in all workbenches.

In: Software Language Engineering, 2013, 197-217

Page 23

Language workbench = IDE + meta language(s)

to build languages + IDEs
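As a rough sketch of that equation (mine, not taken from the talk or from any particular workbench): a single language definition bundles parsing, static checking, and evaluation, and the same checker that a compiler front end would use can drive editor error markers. All names below are hypothetical.

```python
# Minimal sketch of the "one definition, many services" idea behind language
# workbenches. Everything here (names, structure) is hypothetical.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Diagnostic:
    line: int
    message: str

@dataclass
class LanguageDefinition:
    name: str
    parse: Callable[[str], Any]                # text -> AST
    check: Callable[[Any], list[Diagnostic]]   # AST -> static errors/warnings
    evaluate: Callable[[Any, dict], Any]       # AST + environment -> result

def ide_error_markers(lang: LanguageDefinition, source: str) -> list[Diagnostic]:
    """An 'IDE service' derived from the language definition: the same checker
    that the batch tooling uses also produces the editor's error squiggles."""
    ast = lang.parse(source)
    return lang.check(ast)
```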

Page 24

Questionnaire language (QL)
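For concreteness, here is a sketch of the kind of model QL captures: a form of typed questions, computed questions, and conditionally visible questions. This is plain Python of my own, not the official LWC questionnaire syntax or any reference implementation; the house-sale scenario and the valueResidue field echo the example described in the solution excerpts later in this deck, everything else is made up.

```python
# Sketch of a tiny questionnaire model in plain Python, mirroring the kind of
# form the LWC questionnaire DSL (QL) describes. AST shapes and helper names
# are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Question:
    name: str                     # identifier other questions can refer to
    label: str                    # text shown to the user
    type: str                     # "boolean", "money", ...
    computed: str | None = None   # expression over other answers, if computed

@dataclass
class Conditional:
    condition: str                # expression over answers
    body: list = field(default_factory=list)

@dataclass
class Form:
    name: str
    body: list = field(default_factory=list)

tax_form = Form("Box1HouseOwning", [
    Question("hasSoldHouse", "Did you sell a house?", "boolean"),
    Conditional("hasSoldHouse", [
        Question("sellingPrice", "What was the selling price?", "money"),
        Question("privateDebt", "Private debts for the sold house:", "money"),
        Question("valueResidue", "Value residue:", "money",
                 computed="sellingPrice - privateDebt"),
    ]),
])
```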

Page 25

QLS: QL styling
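QLS layers styling on top of a QL form: grouping questions into pages and sections and choosing widgets, without changing the questionnaire itself. A hedged sketch continuing the example above; the structure and names are my own, not the official QLS syntax.

```python
# Continuation of the sketch above: a separate styling model (QLS-like) that
# refers to QL questions by name but leaves the form definition untouched.
# All structure and widget names are hypothetical.
stylesheet = {
    "pages": [
        {"name": "Selling", "sections": [
            {"name": "House sale",
             "questions": ["hasSoldHouse", "sellingPrice", "privateDebt"],
             "widgets": {"hasSoldHouse": "radio(yes, no)"}},
            {"name": "Result",
             "questions": ["valueResidue"],
             "widgets": {"valueResidue": "read-only text"}},
        ]},
    ],
}
```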

Page 27

All LWBs proved adequate to fulfill basic assignment

in under 2500 SLOC

Page 28

Screenshots from two of the workbench solutions: an editor for the development of questionnaires with code coloring, code folding, reference resolution, and type information in hover help; and a graphical questionnaire model (computing the valueResidue field and revealing further questions when a house was sold) in which a type mismatch between compared elements is flagged with a warning sign on the comparison operator symbol.

9 out of 10 included IDE support for QL
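The static analysis behind such warnings can be small. A hedged sketch of one such check, of my own and not taken from any of the workbenches: given the declared type of each question, flag comparisons whose operands have different types.

```python
# Sketch of a QL-style static check: given the declared type of each question,
# flag comparisons between operands of different types. Names are hypothetical;
# a real checker would walk a full expression AST rather than a single
# binary comparison.
def check_comparison(lhs, op, rhs, types):
    """Return a warning if the compared operands have different declared types."""
    t_lhs, t_rhs = types.get(lhs), types.get(rhs)
    if t_lhs is not None and t_rhs is not None and t_lhs != t_rhs:
        return f"type mismatch: {lhs} ({t_lhs}) {op} {rhs} ({t_rhs})"
    return None

declared = {"hasSoldHouse": "boolean", "sellingPrice": "money", "privateDebt": "money"}
print(check_comparison("sellingPrice", ">", "hasSoldHouse", declared))
# -> type mismatch: sellingPrice (money) > hasSoldHouse (boolean)
print(check_comparison("sellingPrice", ">", "privateDebt", declared))
# -> None (both money)
```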

Page 29


7 out of 10 included IDE support for QLS

Page 30


in under 2500 SLOC

Page 31

+

Rats!

Parser, Interpreter, Type checker

≈ 3100 SLOC

https://github.com/software-engineering-amsterdam/sea-of-ql

Compare with vanilla implementation of QL

Median over 48 implementations
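For a sense of what such a hand-written implementation involves, here is a sketch, of my own and not taken from the sea-of-ql repository, of the evaluation core of a vanilla QL interpreter: after each answer, recompute derived values and decide which questions are currently visible. Expressions are represented as Python callables to keep the sketch free of a parser.

```python
# Sketch of the evaluation core of a hand-written questionnaire interpreter:
# given the answers entered so far, compute derived values and decide which
# questions are visible. The form representation is hypothetical.
def evaluate(form, answers):
    env = dict(answers)   # name -> value entered by the user (plus computed values)
    visible = []          # names of questions to show, in order

    def walk(items, condition_holds=True):
        for item in items:
            kind = item["kind"]
            if kind == "question":
                if condition_holds:
                    visible.append(item["name"])
                    if "compute" in item:                 # computed question
                        env[item["name"]] = item["compute"](env)
            elif kind == "if":
                walk(item["body"], condition_holds and bool(item["cond"](env)))

    walk(form)
    return visible, env

form = [
    {"kind": "question", "name": "hasSoldHouse"},
    {"kind": "if", "cond": lambda e: e.get("hasSoldHouse", False), "body": [
        {"kind": "question", "name": "sellingPrice"},
        {"kind": "question", "name": "privateDebt"},
        {"kind": "question", "name": "valueResidue",
         "compute": lambda e: e.get("sellingPrice", 0) - e.get("privateDebt", 0)},
    ]},
]

print(evaluate(form, {"hasSoldHouse": True, "sellingPrice": 250000, "privateDebt": 150000}))
# -> all four questions visible, with valueResidue computed as 100000
```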

Page 32

Tool diversity

• Young / Mature (1 yr ≤ age ≤ 18 yr)

• Academic / Industrial (5 / 5)

• Many meta languages / single meta language

• Textual / projectional / graphical

• No observable bias towards any category

Page 33

Summarizing

• MDD seems to work best using small DSLs

• on critical parts of software systems.

• Code generation is not the main benefit;

• formalizing and documenting architecture is.

Page 34

Graphical vs text

• Benefits of one over another are not clear cut

• Graphical seems harder to comprehend

• Fewer constraints = higher risk of confusion

• In any case: reading graphics is an acquired skill

Page 35

UML

• Think twice before using the UML.

• It’s a general-purpose modeling language,

• and not a very good one at that.

• Hardly anyone uses it (for code generation)

Page 36

Tools

• Be prepared to heavily tailor your tools,

• or even build your own.

• Tools need to adapt to process, not the other way round.

Page 37

Language workbenches

• The state of the art is very diverse,

• yet all workbenches seem functionally adequate.

• Marked productivity benefit compared to vanilla implementation strategies.