


Semantic Web 1 (2013) 1–14, IOS Press

WYSIWYM – Integrated Visualization, Exploration and Authoring of Unstructured and Semantic Content

Ali Khalili *, Sören Auer
University of Leipzig, Institute of Computer Science, AKSW Group, Augustusplatz 10, D-04009 Leipzig, Germany
E-mail: {lastname}@informatik.uni-leipzig.de

Abstract. The Semantic Web and Linked Data gained traction in the last years. However, the majority of information is still contained in unstructured documents. This can also not be expected to change, since text, images and videos are the natural way in which humans interact with information. Semantic structuring, on the other hand, enables the (semi-)automatic integration, repurposing and rearrangement of information. NLP technologies and formalisms for the integrated representation of unstructured and semantic content (such as RDFa and Microdata) aim at bridging this semantic gap. However, in order for humans to truly benefit from this integration, we need ways to author, visualize and explore unstructured and semantic information in a holistic manner. In this paper, we present the WYSIWYM (What You See Is What You Mean) concept, which addresses this issue and formalizes the binding between semantic representation models and UI elements for authoring, visualizing and exploration. With RDFaCE and Pharmer we present and evaluate two complementary showcases implementing the WYSIWYM concept for different application domains.

Keywords: Visualization, Authoring, Exploration, Semantic Web, WYSIWYM, WYSIWYG, Visual Mapping

1. Introduction

The Semantic Web and Linked Data gained traction in the last years. However, the majority of information is still contained in and exchanged using unstructured documents, such as Web pages, text documents, images and videos. Nor can this be expected to change, since text, images and videos are the natural way in which humans interact with information. Semantic structuring, on the other hand, provides a wide range of advantages compared to unstructured information. It facilitates a number of important aspects of information management:

– For search and retrieval, enriching documents with semantic representations helps to create more efficient and effective search interfaces, such as faceted search [27] or question answering [16].

*Corresponding author. E-mail: [email protected].

– In information presentation, semantically enriched documents can be used to create more sophisticated ways of flexibly visualizing information, such as by means of semantic overlays as described in [3].

– For information integration, semantically enriched documents can be used to provide unified views on heterogeneous data stored in different applications by creating composite applications such as semantic mashups [2].

– To realize personalization, semantic documents provide customized and context-specific information which better fits user needs and will result in delivering customized applications such as personalized semantic portals [23].

– For reusability and interoperability, enriching documents with semantic representations facilitates exchanging content between disparate systems and enables building applications such as executable papers [18].

1570-0844/13/$27.50 © 2013 – IOS Press and the authors. All rights reserved

Natural Language Processing (NLP) technologies (e.g. named entity recognition and relationship extraction) as well as formalisms for the integrated representation of unstructured and semantic content (such as RDFa and Microdata) aim at bridging the semantic gap between unstructured and semantic representation formalisms. However, in order for humans to truly benefit from this integration, we need ways to author, visualize and explore unstructured and semantic information in a holistic manner.

In this paper, we present the WYSIWYM (What You See Is What You Mean) concept, which addresses the issue of an integrated visualization, exploration and authoring of unstructured and semantic content. Our WYSIWYM concept formalizes the binding between semantic representation models and UI elements for authoring, visualizing and exploration. We analyse popular tree-, graph- and hypergraph-based semantic representation models and elicit a list of semantic representation elements, such as entities, various relationships and attributes. We provide a comprehensive survey of common UI elements for authoring, visualizing and exploration, which can be configured and bound to individual semantic representation elements. Our WYSIWYM concept also comprises cross-cutting helper components, which can be employed within a concrete WYSIWYM interface for the purpose of automation, annotation, recommendation, personalization etc.

With RDFaCE and Pharmer we present and evaluate two complementary showcases implementing the WYSIWYM concept for different domains. RDFaCE is a domain-agnostic editor for text content with embedded semantics in the form of RDFa or Microdata. Pharmer is a WYSIWYM interface for the authoring of semantic prescriptions, thus targeting the medical domain. Our evaluation of both tools with end-users (in the case of RDFaCE) and domain experts (in the case of Pharmer) shows that WYSIWYM interfaces provide good usability while retaining the benefits of a truly semantic representation.

The contributions of this work are in particular:

1. A formalization of the WYSIWYM concept based on definitions for the WYSIWYM model, binding and concrete interfaces.

2. A comprehensive survey of semantic representation elements of tree, graph and hypergraph knowledge representation formalisms as well as UI elements for authoring, visualization and exploration of such elements.

3. Two complementary use cases, which evaluatedifferent, concrete WYSIWYM interfaces in ageneric as well as domain specific context.

The WYSIWYM formalization can be used as a basis for implementations; it makes it possible to evaluate and classify existing user interfaces in a defined way; and it provides a terminology for software engineers, user interface and domain experts to communicate efficiently and effectively. With this work we aim to contribute to making Semantic Web applications more user friendly and ultimately to create an ecosystem of flexible UI components, which can be reused, repurposed and choreographed to accommodate the UI needs of dynamically evolving information structures.

The remainder of this article is structured as follows: In Section 2, we describe the background of our work and discuss related work. Section 3 describes the fundamental WYSIWYM concept proposed in the paper; its subsections present the different components of the WYSIWYM model. In Section 4, we introduce two implemented WYSIWYM interfaces together with their evaluation results. Finally, Section 5 concludes with an outlook on future work.

2. Related Work

WYSIWYG. The term WYSIWYG, an acronym for What-You-See-Is-What-You-Get, is used in computing to describe a system in which content (text and graphics) displayed on-screen during editing appears in a form closely corresponding to its appearance when printed or displayed as a finished product. The first usage of the term goes back to 1974 in the print industry, expressing the idea that what the user sees on the screen is what the user gets on the printer. Xerox PARC's Bravo was the first WYSIWYG editor-formatter [19]. It was designed by Butler Lampson and Charles Simonyi, who had started working on these concepts around 1970 while at Berkeley. Later, with the emergence of the Web and HTML technology, the WYSIWYG concept was also utilized in Web-based text editors. The aim was to reduce the effort required by users to express formatting directly as valid HTML markup. In a WYSIWYG editor, users can edit content in a view which matches the final appearance of published content with respect to fonts, headings, layout, lists, tables, images and structure. Because using a WYSIWYG editor may not require any HTML knowledge, such editors are often easier for an average computer user to get started with. The first programs for building Web pages with a WYSIWYG interface were Netscape Gold, Claris HomePage, and Adobe PageMill.

WYSIWYG text authoring is meanwhile ubiquitous on the Web and part of most content creation and management workflows. It is part of content management systems (CMS), weblogs, wikis, fora, product data management systems and online shops, just to mention a few. However, the WYSIWYG model has been criticized, primarily for its verbosity, poor support of semantics and the low quality of the generated code, and there have been voices advocating a change towards a WYSIWYM (What-You-See-Is-What-You-Mean) model [26,24].

WYSIWYM. The first use of the WYSIWYM term occurred in 1995, aiming to capture the separation of presentation and content when writing a document. The LyX editor1 was the first WYSIWYM word processor for structure-based content authoring. Instead of focusing on the format or presentation of the document, a WYSIWYM editor preserves the intended meaning of each element. For example, page headers, sections, paragraphs, etc. are labeled as such in the editing program, and displayed appropriately in the browser. Another usage of the WYSIWYM term was by Power et al. [22] in 1998 as a solution for Symbolic Authoring. In symbolic authoring the author generates language-neutral "symbolic" representations of the content of a document, from which documents in each target language are generated automatically, using Natural Language Generation technology. In this What-You-See-Is-What-You-Meant approach, the language generator was used to drive the user interface (UI) with support for localization and multilinguality. Using the WYSIWYM natural language generation approach, the system generates a feedback text for the user that is based on a semantic representation. This representation can be edited directly by the user by manipulating the feedback text.

The WYSIWYM term as defined and used in this paper targets the novel aspect of integrated visualization, exploration and authoring of unstructured and semantic content. The rationale of our WYSIWYM concept is to enrich the existing WYSIWYG presentational view of the content with UI components revealing the semantics embedded in the content, and to enable the exploration and authoring of semantic content. Instead of separating presentation, content and meaning, our WYSIWYM approach aims to integrate these aspects to facilitate the process of Semantic Content Authoring. There are already some approaches (i.e. visual mapping techniques) which go in the direction of integrated visualization and authoring of structured content.

1 http://www.lyx.org/

Visual Mapping Techniques. Visual mapping techniques (a.k.a. knowledge representation techniques) are methods to graphically represent knowledge structures. Most of them have been developed as paper-based techniques for brainstorming, learning facilitation, outlining or to elicit knowledge structures. According to their basic topology, most of them can be related to the following fundamentally different primary approaches [6,25]:

– Mind-Maps. Mind-maps are created by drawing one central topic in the middle together with labeled branches and sub-branches emerging from it. Instead of distinct nodes and links, mind-maps only have labeled branches. A mind-map is a connected directed acyclic graph with hierarchy as its only type of relation. Outlines are a similar technique, showing hierarchical relationships using a tree structure. Mind-maps and outlines are not suitable for relational structures because they are constrained to the hierarchical model.

– Concept Maps. Concept maps consist of labeled nodes and labeled edges linking all nodes into a connected directed graph. The basic node-and-link structure of a connected directed labeled graph also forms the basis of many other modeling approaches such as Entity-Relationship (ER) diagrams and Semantic Networks. These forms have the same basic structure as concept maps but with more formal types of nodes and links.

– Spatial Hypertext. A spatial hypertext is a set of text nodes that are not explicitly connected but implicitly related through their spatial layout, e.g., through closeness and adjacency, similar to a pin-board. Spatial hypertext can show fuzzily related items. To fuzzily relate two items in a spatial hypertext schema, they are simply placed near to each other, but possibly not quite as near as to a third object. This allows for so-called "constructive ambiguity" and is an intuitive way to deal with vague relations and orders. Spatial hypertext abandons the concept of explicitly interrelating objects. Instead, it uses spatial positioning as the basic structure.

Binding data to UI elements. There are already many approaches and tools which address the binding between data and UI elements for visualizing and exploring semantically structured data. Dadzie and Rowe [4] present the most exhaustive and comprehensive survey to date of these approaches. For example, Fresnel [21] is a display vocabulary for core RDF concepts. Fresnel's two foundational concepts are lenses and formats. Lenses define which properties of an RDF resource, or group of related resources, are displayed and how those properties are ordered. Formats determine how resources and properties are rendered and provide hooks to existing styling languages such as CSS. Parallax, Tabulator, Explorator, Rhizomer, Sgvizler, Fenfire, RDF-Gravity, IsaViz and i-Disc for Topic Maps are examples of tools available for visualizing and exploring semantically structured data. In these tools the binding between semantics and UI elements is mostly performed implicitly, which limits their versatility. However, an explicit binding as advocated by our WYSIWYM model could potentially be added to some of these tools.

In contrast to such tools for structured content, there are many approaches and tools which allow binding semantic data to UI elements within unstructured content (cf. our comprehensive literature study [10]). As an example, Dido [9] is a data-interactive document which lets end users author semantic content mixed with unstructured content in a web page. Dido inherits data exploration capabilities from the underlying Exhibit2 framework. Loomp, a proof-of-concept for the One Click Annotation [1] strategy, is another example in this context. Loomp is a WYSIWYG web editor for enriching content with RDFa annotations. It employs a partial mapping between UI elements and data to hide the complexity of creating semantic data.

3. WYSIWYM Concept

In this section we introduce the fundamental WYSIWYM concept and formalize its key elements. Formalizing the WYSIWYM concept has a number of advantages: First, the formalization can be used

2 http://simile-widgets.org/exhibit/

[Figure 1: semantic representation data models connected via bindings and per-technique configs to visualization, exploration and authoring techniques, complemented by helper components, together forming a WYSIWYM interface.]

Fig. 1. Schematic view of the WYSIWYM model.

as a basis for the design and implementation of novel applications for authoring, visualization, and exploration of semantic content (cf. Section 4). The formalization serves the purpose of providing a terminology for software engineers, user interface and domain experts to communicate efficiently and effectively. It provides insights into and an understanding of the requirements as well as corresponding UI solutions for the proper design and implementation of semantic content management applications. Secondly, it makes it possible to evaluate and classify existing user interfaces according to the conceptual model in a defined way. This will highlight the gaps in existing applications dealing with semantic content.

Figure 1 provides a schematic overview of the WYSIWYM concept. The rationale is that elements of a knowledge representation formalism (or data model) are connected to suitable UI elements for visualization, exploration and authoring. Formalizing this conceptual model results in three core definitions: (1) the abstract WYSIWYM model, (2) bindings between UI and representation elements, as well as (3) a concrete instantiation of the abstract WYSIWYM model, which we call a WYSIWYM interface.

Definition 1 (WYSIWYM model). The WYSIWYM model can be formally defined as a quintuple (D, V, X, T, H) where:

– D is a set of semantic representation data models, where each Di ∈ D has an associated set of data model elements EDi;

– V is a set of tuples (v, Cv), where v is a visualization technique and Cv a set of possible configurations for the visualization technique v;

– X is a set of tuples (x, Cx), where x is an exploration technique and Cx a set of possible configurations for the exploration technique x;



– T is a set of tuples (t, Ct), where t is an authoring technique and Ct a set of possible configurations for the authoring technique t;

– H is a set of helper components.

The WYSIWYM model represents an abstract concept from which concrete interfaces can be derived by means of bindings between semantic representation model elements and configurations of particular UI elements.

Definition 2 (Binding). A binding b is a function which maps each element e of a semantic representation model (e ∈ EDi) to a set of tuples (ui, c), where ui is a user interface technique (ui ∈ V ∪ X ∪ T) and c is a configuration (c ∈ Cui).

Figure 4 gives an overview of all data model elements (columns) and UI elements (rows) and how they can be bound together using a certain configuration (cells). The shades of gray in a certain cell indicate the suitability of a certain binding between a particular UI and data model element. Once a selection of data models and UI elements has been made and both are bound to each other, encoding a certain configuration in a binding, we attain a concrete instantiation of our WYSIWYM model called a WYSIWYM interface.

Definition 3 (WYSIWYM interface). An instantiation of the WYSIWYM model, called a WYSIWYM interface, is a hextuple I = (DI, VI, XI, TI, HI, bI), where:

– DI is a selection of semantic representation data models (DI ⊂ D);

– VI is a selection of visualization techniques (VI ⊂ V);

– XI is a selection of exploration techniques (XI ⊂ X);

– TI is a selection of authoring techniques (TI ⊂ T);

– HI is a selection of helper components (HI ⊂ H);

– bI is a binding which binds a particular occurrence of a data model element to a visualization, exploration and/or authoring technique.
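To make the definitions above more concrete, they can be sketched as plain Python data structures. This is an illustrative encoding only, not from the paper: the element and technique names (ItemType, TextFormatting, colour:blue) are hypothetical.

```python
# A minimal sketch of Definitions 1-3: a WYSIWYM model groups data models,
# UI techniques with their allowed configurations, and helper components;
# a binding maps each data model element to (technique, configuration) pairs;
# a WYSIWYM interface is a selection from the model plus one binding.
from dataclasses import dataclass, field

@dataclass
class WysiwymModel:
    data_models: dict     # D: name -> set of data model elements E_Di
    visualizations: dict  # V: technique v -> set of configurations Cv
    explorations: dict    # X: technique x -> set of configurations Cx
    authorings: dict      # T: technique t -> set of configurations Ct
    helpers: set          # H: helper components

@dataclass
class WysiwymInterface:
    model: WysiwymModel
    data_model: str  # the single supported Di (cf. the one-binding note below)
    binding: dict = field(default_factory=dict)  # e -> set of (ui, config)

    def bind(self, element, technique, config):
        """Bind a data model element to a UI technique with a configuration."""
        ui = {**self.model.visualizations, **self.model.explorations,
              **self.model.authorings}
        assert technique in ui and config in ui[technique]
        assert element in self.model.data_models[self.data_model]
        self.binding.setdefault(element, set()).add((technique, config))

# Hypothetical instantiation: bind the tree-model element "ItemType"
# to a text-formatting visualization with a colour configuration.
m = WysiwymModel(
    data_models={"tree": {"Item", "ItemType"}},
    visualizations={"TextFormatting": {"colour:blue"}},
    explorations={}, authorings={}, helpers=set())
i = WysiwymInterface(model=m, data_model="tree")
i.bind("ItemType", "TextFormatting", "colour:blue")
print(i.binding["ItemType"])
```

The `bind` method enforces exactly the constraints of Definition 2: the technique must come from V ∪ X ∪ T, and the configuration from that technique's configuration set.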

Note that we limit the definition to one binding, which means that only one semantic representation model is supported in a particular WYSIWYM interface at a time. It would also be possible to support several semantic representation models (e.g. RDFa and Microdata) at the same time. However, this can be confusing to the user, which is why we deliberately excluded this case in our definition. In the remainder

[Figure 2: a plot with semantic expressiveness on the horizontal axis (low to high) and complexity of visual mapping on the vertical axis (low to high), placing mind-maps with tree-based formalisms (Microdata, Microformats), concept maps with graph-based formalisms (RDF: RDF/XML, Turtle/N3/N-Triples, RDFa, JSON-LD) and spatial hypertext with hypergraph-based formalisms (Topic Maps: XTM, LTM, CTM, AsTMa).]

Fig. 2. Comparison of existing visual mapping techniques in terms of semantic expressiveness and complexity of visual mapping.

of this section we discuss the different parts of the WYSIWYM concept in more detail.

3.1. Semantic Representation Models

Semantic representation models are conceptual data models to express the meaning of information, thereby enabling the representation and interchange of knowledge. Based on their expressiveness, we can roughly divide popular semantic representation models into the three categories tree-based, graph-based and hypergraph-based (cf. Figure 2). Each semantic representation model comprises a number of representation elements, such as various types of entities and relationships. For visualization, exploration and authoring it is of paramount importance to bind the most suitable UI elements to the respective representation elements. In the sequel we briefly discuss the three different types of representation models.

Tree-based. This is the simplest semantic representation model, where semantics is encoded in a tree-like structure. It is suited for representing taxonomic knowledge, such as thesauri, classification schemes, subject heading lists, concept hierarchies or mind-maps. It is used extensively in biology and the life sciences, for example, in the APG III system (Angiosperm Phylogeny Group III system) of flowering plant classification, as part of the Dimensions of XBRL (eXtensible Business Reporting Language), or generically in SKOS (Simple Knowledge Organization System). Elements of tree-based semantic representations usually include:



– E1: Item – e.g. Magnoliidae, the item representing all flowering plants.

– E2: Item type – e.g. biological term for Magnoliidae.

– E3: Item-subitem relationships – e.g. Magnoliidae referring to the subitem magnolias.

– E4: Item property value – e.g. the synonym flowering plant for the item Magnoliidae.

– E5: Related items – e.g. the sibling item Eudicots to Magnoliidae.

Tree-based data can be serialized as Microdata or Microformats.
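As a small illustration, the flowering-plant examples above can be sketched as a nested structure in plain Python. The root node "Angiosperms" and the property names are illustrative assumptions, not part of the paper's element list.

```python
# A minimal sketch of the tree-based model (elements E1-E5) as nested dicts:
# each node has an item type (E2), property values (E4) and subitems (E3);
# related items (E5) are the other children of a node's parent.
taxonomy = {
    "Angiosperms": {                                  # hypothetical root
        "type": "biological term",                    # E2: item type
        "properties": {},                             # E4: property values
        "subitems": {                                 # E3: item-subitem
            "Magnoliidae": {                          # E1: item
                "type": "biological term",
                "properties": {"synonym": "flowering plant"},
                "subitems": {
                    "magnolias": {"type": "biological term",
                                  "properties": {}, "subitems": {}},
                },
            },
            "Eudicots": {"type": "biological term",
                         "properties": {}, "subitems": {}},
        },
    }
}

def siblings(forest, name):
    """Related items (E5): the other children of the node's parent."""
    for node in forest.values():
        children = node["subitems"]
        if name in children:
            return [c for c in children if c != name]
        found = siblings(children, name)
        if found is not None:
            return found
    return None

print(siblings(taxonomy, "Magnoliidae"))   # ['Eudicots']
```

Because the only relation is the hierarchical one, every element of the model is recoverable by walking the tree, which is exactly what makes this formalism simple but limited.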

Graph-based. This semantic representation model adds more expressiveness compared to simple tree-based formalisms. The most prominent representative is the RDF data model, which can be seen as a set of triples consisting of subject, predicate and object, where each component can be a URI, the object can be a literal, and subject as well as object can be blank nodes. The most distinguishing features of RDF compared to a simple tree-based model are the distinction of entities into classes and instances as well as the possibility to express arbitrary relationships between entities. The graph-based model is suited for representing combinatorial schemes such as concept maps. Graph-based models are used in a very broad range of domains, for example, in FOAF (Friend of a Friend) for describing people, their interests and interconnections in a social network, in MusicBrainz to publish information about music albums, in the medical domain (e.g. DrugBank, Diseasome, ChEMBL, SIDER) to describe the relations between diseases, drugs and genes, or generically in the SIOC (Semantically-Interlinked Online Communities) vocabulary. Elements of RDF as a typical graph-based data model are:

– E1: Instances – e.g. Warfarin as a drug.

– E2: Classes – e.g. the anticoagulants drug class for Warfarin.

– E3: Relationships between entities (instances or classes) – e.g. the interaction between Aspirin as an antiplatelet drug and Warfarin, which will increase the risk of bleeding.

– E4: Literal property values – e.g. the half-life of Amoxicillin.

  ∗ E4.1: Value – e.g. 61.3 minutes.
  ∗ E4.2: Language tag – e.g. en.
  ∗ E4.3: Datatype – e.g. xsd:float.

RDF-based data can be serialized in various formats, such as RDFa, RDF/XML, JSON-LD or Turtle/N3/N-Triples.
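The drug examples above can be sketched as a minimal set of RDF-style triples in plain Python. The half-life value follows the example in the text, but the "ex:" URIs and property names are hypothetical vocabulary invented for illustration, not terms from an actual ontology.

```python
# A minimal, self-contained sketch of the graph-based model: each fact is a
# (subject, predicate, object) triple; literals may carry a datatype,
# mirroring elements E1-E4 above.
triples = [
    ("ex:Warfarin", "rdf:type", "ex:AnticoagulantDrug"),   # E1 instance + E2 class
    ("ex:Aspirin",  "rdf:type", "ex:AntiplateletDrug"),
    ("ex:Aspirin",  "ex:interactsWith", "ex:Warfarin"),    # E3 relationship
    ("ex:Amoxicillin", "ex:halfLifeMinutes",
     ("61.3", "xsd:float")),                               # E4 literal with datatype
]

def objects(subject, predicate):
    """Return all objects of triples matching the given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("ex:Aspirin", "ex:interactsWith"))   # ['ex:Warfarin']
```

Unlike the tree sketch, any node can be the subject or object of any number of triples, which is what allows the arbitrary relationships the text describes.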

Hypergraph-based. A hypergraph is a generalization of a graph in which an edge can connect any number of vertices. Since hypergraph-based models allow n-ary relationships between an arbitrary number of nodes, they provide a higher level of expressiveness compared to tree-based and graph-based models. The most prominent representative is the Topic Maps data model, developed as an ISO/IEC standard, which consists of topics, associations and occurrences. The semantic expressivity of Topic Maps is, in many ways, equivalent to that of RDF, but the major differences are that Topic Maps (i) provide a higher level of semantic abstraction (providing a template of topics, associations and occurrences, while RDF only provides a template of two arguments linked by one relationship) and (hence) (ii) allow n-ary relationships (hypergraphs) between any number of nodes, while RDF is limited to triples. The hypergraph-based model is suited for representing complex schemes such as spatial hypertext. Hypergraph-based models are used for a variety of applications. Amongst them are musicDNA3 as an index of musicians, composers, performers, bands, artists, producers, their music, and the events that link them together, TM4L (Topic Maps for e-Learning), clinical decision support systems and enterprise information integration. Elements of Topic Maps as a typical hypergraph-based data model are:

– E1: Topic name – e.g. University of Leipzig.

– E2: Topic type – e.g. organization for University of Leipzig.

– E3: Topic associations – e.g. member of a project which has other organization partners.

– E4: Topic role in association – e.g. coordinator.

– E5: Topic occurrences – e.g. address.

  ∗ E5.1: Value – e.g. Augustusplatz 10, 04109 Leipzig.
  ∗ E5.2: Datatype – e.g. text.

Topic Maps-based data can be serialized in an XML-based syntax called XTM (XML Topic Maps), as well as in LTM (Linear Topic Map Notation), CTM (Compact Topic Maps Notation) and AsTMa (Asymptotic Topic Map Notation).
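The elements above can be sketched in plain Python by modeling an association as a hyperedge: a set of (role, topic) pairs rather than a binary link. The University of Leipzig topic and the coordinator role follow the examples in the text; the partner organizations and identifiers are hypothetical.

```python
# A minimal sketch of the hypergraph-based model: topics carry a name, a type
# and occurrences (E1, E2, E5); an association (E3) is an n-ary edge whose
# members play roles (E4), so one edge may connect any number of topics.
topics = {
    "uni-leipzig": {
        "name": "University of Leipzig",                     # E1: topic name
        "type": "organization",                              # E2: topic type
        "occurrences": [                                     # E5: occurrences
            {"value": "Augustusplatz 10, 04109 Leipzig",     # E5.1: value
             "datatype": "text"},                            # E5.2: datatype
        ],
    },
    "org-a": {"name": "Partner A", "type": "organization", "occurrences": []},
    "org-b": {"name": "Partner B", "type": "organization", "occurrences": []},
}

# E3/E4: a single association linking three topics at once, each with a role.
associations = [
    {"type": "member-of-project",
     "members": {("coordinator", "uni-leipzig"),
                 ("partner", "org-a"),
                 ("partner", "org-b")}},
]

def roles_in(assoc_type, topic):
    """All roles a topic plays in associations of the given type."""
    return [role for a in associations if a["type"] == assoc_type
            for role, t in a["members"] if t == topic]

print(roles_in("member-of-project", "uni-leipzig"))   # ['coordinator']
```

Representing the same three-party membership in plain RDF would require reifying the association as an extra node with one triple per member, which is the expressiveness gap the text describes.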

3 http://www.musicdna.info/



Fig. 3. Screenshots of user interface techniques for visualization and exploration: 1-framing using borders, 2-framing using backgrounds, 3-video subtitle, 4-line connectors and arrow connectors, 5-bar layouts, 6-text formatting, 7-image color effects, framing and line connectors, 8-expandable callout, 9-marking with icons, 10-tooltip callout, 11-faceting.

3.2. Visualization

The primary objectives of visualization are to present, transform, and convert semantic data into a visual representation so that humans can read, query and edit it efficiently. We divide existing techniques for the visualization of knowledge encoded in text, images and videos into the three categories Highlighting, Associating and Detail view. Highlighting includes UI techniques which are used to distinguish or highlight a part of an object (i.e. text, image or video) from the whole object. Associating deals with techniques that visualize the relation between some parts of an object. Detail view includes techniques which reveal detailed information about a part of an object. For each of the above categories, the related UI techniques are as follows:

- Highlighting.

– V1: Framing and Segmentation (borders, overlays and backgrounds). With this technique, which can be applied to text, images and videos, we enclose a semantic entity in a coloured border, background or overlay. Different border styles (colours, width, types), background styles (colours, patterns) or overlay styles (when applied to images and videos) can be used to distinguish different types of semantic entities (cf. Figure 3 no. 1, 2). The technique is already employed in social networking websites such as Google Plus and Facebook to tag people within images.

– V2: Text formatting (color, font, size, etc.). In this technique, different text styles such as font family, style, weight, size, colour, shadows and other text decoration techniques are used to distinguish semantic entities within a text (cf. Figure 3 no. 6). The problem with this technique is that in an HTML document, the applied semantic styles might overlap with existing styles in the document and thereby add ambiguity to recognizing semantic entities.

– V3: Image color effects. This technique is similar to text formatting but applied to images and videos. Different image color effects such as brightness/contrast, shadows, glows and bevel/emboss are used to highlight semantic entities within an image (cf. Figure 3 no. 7). This technique suffers from the problem that the applied effects might overlap with existing effects in the image, thereby making it hard to distinguish the semantic entities.

– V4: Marking (icons appended to text or image). In this technique, which can be applied to text, images and videos, we append an icon as a marker to the part of the object which includes the semantic entity (cf. Figure 3 no. 9). The most popular use of this technique is currently within maps to indicate specific points of interest. Different types of icons can be used to distinguish different types of semantic or correlated entities.

– V5: Bleeping. A bleep is a single short high-pitched signal in videos. Bleeping can be used to highlight semantic entities within a video. Different types of bleep signals can be defined to distinguish different types of semantic entities.

– V6: Speech (in videos). In this technique a video is augmented by some speech indicating the semantic entities and their types within the video.

- Associating.

– V7: Line connectors. Using line connectors is the simplest way to visualize the relation between semantic entities in text, images and videos (cf. Figure 3 no. 4). If the value of a property is available in the text, line connectors can also reflect the item property values. A problem is that plain line connectors cannot express the direction of a relation.

– V8: Arrow connectors. Arrow connectors are extended line connectors with arrows to express the direction of a relation in a directed graph.

- Detail view.

– V9: Callouts. A callout is a string of text connected by a line, arrow, or similar graphic to a part of a text, image or video, giving information about that part. It is used in conjunction with a cursor, usually a pointer: the user hovers the pointer over an item, without clicking it, and a callout appears (cf. Figure 3 no. 10). Callouts come in different styles and templates such as infotips, tooltips, hints and popups. Different sorts of semantic information can be embedded in a callout to indicate the type of semantic entities, property values and relationships. Another variant of callouts is the status bar, which displays semantic information in a bar appended to the text, image or video container. A problem with dynamic callouts is that they do not appear on mobile devices (by hover), since there is no cursor.

– V10: Video subtitles. Subtitles are textual versions of the dialog or commentary in videos. They are usually displayed at the bottom of the screen and are employed for written translation of a dialog in a foreign language. Video subtitles can be used to reflect detailed semantics embedded in a video scene when watching the video. A problem with subtitles is efficiently scaling the text size and relating text to semantic entities when several semantic entities exist in a scene.
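The framing technique (V1), for instance, amounts to wrapping a recognized entity in a styled, RDFa-annotated span whose border colour encodes the entity type. The following sketch illustrates the idea; the colour mapping and helper names are our own illustrative assumptions, not taken from any of the tools discussed here:

```python
# Sketch of the framing technique (V1): enclose a semantic entity in an
# RDFa-annotated <span> whose border colour encodes the entity type.
# The colour scheme and function names are illustrative assumptions.
TYPE_COLOURS = {
    "http://schema.org/Person": "#1f77b4",
    "http://schema.org/Place": "#2ca02c",
    "http://schema.org/Organization": "#d62728",
}

def frame_entity(text, entity, entity_type, resource):
    """Return `text` with the first occurrence of `entity` framed."""
    colour = TYPE_COLOURS.get(entity_type, "#7f7f7f")  # grey fallback
    span = (f'<span about="{resource}" typeof="{entity_type}" '
            f'style="border: 2px solid {colour}">{entity}</span>')
    return text.replace(entity, span, 1)

html = frame_entity("Leipzig is a city in Germany.",
                    "Leipzig", "http://schema.org/Place",
                    "http://dbpedia.org/resource/Leipzig")
```

The same pattern carries over to backgrounds and overlays by changing only the emitted style declaration.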

3.3. Exploration

To increase the effectiveness of visualizations, users need to be able to dynamically explore the visual representation of the semantic data. The dynamic exploration of semantic data results in faster and easier comprehension of the targeted content. Techniques for the exploration of semantics encoded in text, images and videos include:

– X1: Zooming. In a zoomable UI, users can change the scale of the viewed area in order to see more or less detail. Zooming in on a semantic entity can reveal further details such as property values or the entity type. Zooming out can be employed to reveal the relations between semantic entities in a text, image or video. Supporting rich dynamics by configuring different visual representations for semantic objects at different sizes is a requirement for a zoomable UI. The iMapping approach [6], which is implemented in the semantic desktop, is an example of the zooming technique.

– X2: Faceting. Faceted browsing is a technique for accessing information organized according to a faceted classification system, allowing users to explore a collection of information by applying multiple filters (cf. Figure 3 no. 11). Defining facets for each component of the predefined semantic models enables users to browse the underlying knowledge space by iteratively narrowing the scope of their quest in a predetermined order. One of the main problems with faceted browsers is the increased number of choices presented to the user at each step of the exploration [5].

– X3: Bar layouts. In the bar layout, each semantic entity within the text is indicated by a vertical bar in the left or right margin (cf. Figure 3 no. 5). The colour of the bar reflects the type of the entity. The bars are ordered by length and order in the text. Nested bars can be used to show the hierarchies of entities. Semantic entities in the text are highlighted by a mouse-over on the corresponding bar. This approach is employed in Loomp [17].

– X4: Expandable callouts. Expandable callouts are interactive and dynamic callouts which enable users to explore the semantic data associated with a predefined semantic entity (cf. Figure 3 no. 8). This technique is employed in OntosFeeder [11].
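The core of faceting (X2) is the iterative narrowing of a collection by facet filters. The sketch below illustrates this with made-up entity records and facet names; it is not the data model of any tool mentioned above:

```python
# Sketch of faceted browsing (X2) over annotated entities.
# The entity records and facet names are illustrative assumptions.
entities = [
    {"label": "Leipzig", "type": "Place", "source": "wiki"},
    {"label": "Ali Khalili", "type": "Person", "source": "blog"},
    {"label": "IOS Press", "type": "Organization", "source": "news"},
    {"label": "Berlin", "type": "Place", "source": "news"},
]

def facet_values(items, facet):
    """The choices a faceted browser would offer for one facet."""
    return sorted({item[facet] for item in items})

def apply_filters(items, filters):
    """Iteratively narrow the collection, one facet filter at a time."""
    for facet, value in filters.items():
        items = [item for item in items if item[facet] == value]
    return items

places = apply_filters(entities, {"type": "Place"})
news_places = apply_filters(entities, {"type": "Place", "source": "news"})
```

The growing number of `facet_values` choices at each step is exactly the usability problem noted for faceted browsers above.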

3.4. Authoring

Semantic authoring aims to add more meaning to digitally published documents. If users do not only publish the content, but at the same time describe what it is they are publishing, they have to adopt a structured approach to authoring. A semantic authoring UI is a human-accessible interface with capabilities for writing and modifying semantic documents. The following techniques can be used for the authoring of semantics encoded in text, images and videos:

– T1: Form editing. In form editing, a user employs existing form elements such as input/check/radio boxes, drop-down menus, sliders, spinners, buttons, date/color pickers etc. for content authoring.

– T2: Inline edit. Inline editing is the process of editing items directly in the view by performing simple clicks, rather than selecting items and then navigating to an edit form and submitting changes from there.

– T3: Drawing. Drawing, as part of informal user interfaces [14], provides natural human input to annotate an object by augmenting the object with human-understandable sketches. For instance, users can draw a frame around semantic entities, draw a line between related entities etc. Special shapes can be drawn to indicate different entity types or entity roles in a relation.

– T4: Drag and drop. Drag and drop is a pointing device gesture in which the user selects a virtual object by grabbing it and dragging it to a different location or onto another virtual object. In general, it can be used to invoke many kinds of actions, or create various types of associations between two abstract objects.

– T5: Context menu. A context menu (also called contextual, shortcut, or pop-up menu) is a menu that appears upon user interaction, such as a right mouse button click. A context menu offers a limited set of choices that are available in the current state, or context.

– T6: (Floating) Ribbon editing. A ribbon is a command bar that organizes functions into a series of tabs or toolbars at the top of the editable content. Ribbon tabs/toolbars are composed of groups, which are labeled sets of closely related commands. A floating ribbon is a ribbon that appears when the user rolls the mouse over a target area. A floating ribbon increases usability by bringing edit functions as close as possible to the user's point of focus. The Aloha WYSIWYG editor (http://aloha-editor.org) is an example of floating-ribbon-based content authoring.

– T7: Voice commands. Voice commands permit the user's hands and eyes to be busy with another task, which is particularly valuable when users are in motion or outside. Users tend to prefer speech for functions like describing objects, sets and subsets of objects [20].

– T8: (Multi-touch) gestures. A gesture (a.k.a. sign language) is a form of non-verbal communication in which visible bodily actions communicate particular messages. Technically, different methods can be used for detecting and identifying gestures; movement-sensor-based and camera-based approaches are two commonly used methods for the recognition of in-air gestures [15]. Multi-touch gestures are another type of gesture, defined to interact with multi-touch devices such as modern smartphones and tablets. Users can use gestures to determine semantic entities, their types and the relationships among them. The main problem with gestures is their high level of abstraction, which makes it hard to assert concrete property values.
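Whichever input technique is used, each authoring action ultimately updates the underlying semantic model. A minimal sketch of translating a form-editing (T1) submission into RDF-style statements; the field names, vocabulary URIs and helper function are illustrative assumptions, using plain tuples rather than a specific RDF library:

```python
# Sketch: mapping a form-editing (T1) submission to (s, p, o) triples.
# Field names and vocabulary URIs are illustrative assumptions.
SCHEMA = "http://schema.org/"

def form_to_triples(subject, form_data):
    """Translate submitted form fields into RDF-style triples."""
    triples = [(subject, "rdf:type", SCHEMA + form_data["type"])]
    for prop in ("name", "birthDate", "affiliation"):
        if form_data.get(prop):  # skip empty fields
            triples.append((subject, SCHEMA + prop, form_data[prop]))
    return triples

triples = form_to_triples(
    "http://example.org/person/1",
    {"type": "Person", "name": "Ali Khalili", "birthDate": "",
     "affiliation": "University of Leipzig"},
)
```

An inline-edit (T2) or context-menu (T5) action would feed the same function with a single changed field, which keeps the authoring back-end independent of the chosen input technique.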

3.5. Bindings

Figure 4 surveys possible bindings between the user interface and semantic representation elements. The bindings were derived based on the following methodology:

1. We first analyzed existing semantic representation models and extracted the corresponding elements for each semantic model.

2. We performed an extensive literature study regarding existing approaches for visual mapping as well as approaches addressing the binding between data and UI elements. If an approach explicitly mentioned a binding composed of UI elements and semantic model elements, we added the binding to our mapping table.

3. We analyzed existing tools and applications which were implicitly addressing the binding between data and UI elements.

4. Finally, we followed a predictive approach: we investigated additional UI elements which are listed in existing HCI glossaries and carefully analyzed their potential to be connected to a semantic model element.

Although we deem the bindings to be fairly complete, new UI elements might be developed or additional data models (or variations of the ones considered) might appear; in this case the bindings can easily be extended.

The following binding configurations are available and referred to from the cells of Figure 4:

– Defining a special border or background style (C1), text style (C2), image color effect (C4), beep sound (C5), bar style (C6), sketch (C7), draggable or droppable shape (C8), voice command (C9), gesture (C10) or a related icon (C3) for each type.

– Progressive shading (C11) by defining continuous shades within a specific color scheme to distinguish items at different levels of the hierarchy.

– Hierarchical bars (C12) by defining special styles for nested bars.

– Grouping by similar border or background style (C13), text style (C14), icons (C15) or image color effects (C16).
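Such a declarative binding can be represented as a simple lookup from semantic model elements to the UI techniques and configurations that render them. The excerpt below is a hand-picked, illustrative subset of Figure 4 with short keys of our own choosing, not the complete binding matrix:

```python
# Sketch of a declarative binding table: semantic model elements mapped
# to (UI technique, configuration) pairs. The keys and the selection of
# rows are an illustrative subset, not the complete matrix of Figure 4.
BINDINGS = {
    ("graph", "instance"): [("framing", "C1"), ("text-formatting", "C2"),
                            ("marking", "C3")],
    ("tree", "item-subitem"): [("framing", "C11"), ("bar-layout", "C12")],
    ("hypergraph", "topic-type"): [("framing", "C13"), ("marking", "C15")],
}

def ui_options(model, element):
    """Look up which UI techniques can render a given model element."""
    return BINDINGS.get((model, element), [])  # empty list = no binding

options = ui_options("graph", "instance")
```

A WYSIWYM implementation can then dispatch purely on this table, which is what makes the bindings easy to extend when new UI elements or data models appear.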

3.6. Helper Components

In order to facilitate, enhance and customize the WYSIWYM model, we utilize a set of helper components, which implement cross-cutting aspects. A helper component acts as an extension on top of the core functionality of the WYSIWYM model. The following components can be used to improve the quality of a WYSIWYM UI, depending on the requirements defined for a specific application domain:

– H1: Automation means the provision of facilities for the automatic annotation of text, images and videos to reduce the need for human work and thereby facilitate the efficient annotation of large item collections. For example, users can employ existing NLP services (e.g. named entity recognition, relationship extraction) for automatic text annotation.

– H2: Real-time tagging is an extension of automation which creates annotations proactively while the user is authoring a text, image or video. This significantly increases the annotation speed, and users are not distracted since they do not have to interrupt their current authoring task.

– H3: Recommendation means providing users with pre-filled form fields, suggestions (e.g. for URIs, namespaces, properties), default values etc. These facilities simplify the authoring process, as they reduce the number of required user interactions. Moreover, they help prevent incomplete or empty metadata. In order to leverage other users' annotations as recommendations, approaches like Paragraph Fingerprinting [8] can be implemented.

– H4: Personalization and context-awareness describes the ability of the UI to be configured according to users' contexts, background knowledge and preferences. Instead of being static, a personalized UI dynamically tailors its visualization, exploration and authoring functionalities based on the user profile and context.

– H5: Collaboration and crowdsourcing enables collaborative semantic authoring, where the authoring process can be shared among different authors at different locations. There are vast numbers of amateur and expert users collaborating and contributing on the Social Web. Crowdsourcing harnesses the power of such crowds to significantly enhance and widen the results of semantic content authoring and annotation. Generic approaches for exploiting single-user Web applications for shared editing [7] can be employed in this context.

– H6: Accessibility means providing people with disabilities and special needs with appropriate UIs. The underlying semantic model in a WYSIWYM UI can allow alternatives or conditional content in different modalities to be selected based on the type of the user's disability and information need.

– H7: Multilinguality means supporting multiple languages in a WYSIWYM UI when visualizing, exploring or authoring the content.
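The automation component (H1), for example, can wrap any entity recognizer behind a uniform interface so that its output feeds directly into the visualization layer. In the sketch below, a toy regex-based recognizer stands in for a real NER service; all names and the output format are illustrative assumptions:

```python
import re

# Sketch of the automation helper (H1): a pluggable annotator whose
# output can be rendered by the visualization layer. The toy regex
# recognizer stands in for a real NER service; names are assumptions.
def toy_recognizer(text):
    """Very naive 'NER': treat capitalized words as Person entities."""
    return [{"surface": m.group(), "start": m.start(),
             "type": "http://schema.org/Person"}
            for m in re.finditer(r"\b[A-Z][a-z]+\b", text)]

def annotate(text, recognizer=toy_recognizer):
    """Run a recognizer and keep only well-formed entity annotations."""
    return [e for e in recognizer(text)
            if text[e["start"]:e["start"] + len(e["surface"])] == e["surface"]]

annotations = annotate("Ali met Soeren in Leipzig.")
```

Real-time tagging (H2) is then just a matter of re-running `annotate` (or an incremental variant) on each edit, and recommendation (H3) of ranking candidate types per surface form.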


Fig. 4. Possible bindings between user interface and semantic representation model elements. The figure is a matrix relating, for structure encoded in text, images and videos, the UI categories and techniques for visualization, exploration and authoring (rows) to the elements of tree-based (e.g. taxonomies), graph-based (e.g. RDF) and hypergraph-based (e.g. Topic Maps) representation models (columns). Each cell indicates no, partial or full binding, together with the applicable binding configurations C1–C16 (*: if the value is available in the text/subtitle).


4. Implementation and Evaluation

In order to evaluate the WYSIWYM model, we implemented the two applications RDFaCE and Pharmer, which we present in the sequel.

RDFaCE. RDFaCE (RDFa Content Editor) is a WYSIWYM interface for semantic content authoring. It is implemented on top of the TinyMCE rich text editor. RDFaCE extends existing WYSIWYG user interfaces to facilitate semantic authoring within popular CMSs, such as blogs, wikis and discussion forums. The RDFaCE implementation (cf. Figure 5, left) is open-source and available for download together with an explanatory video and online demo at http://aksw.org/Projects/RDFaCE. RDFaCE as a WYSIWYM instantiation can be described using the following hextuple:

– D: RDFa, Microdata5.
– V: Framing using borders (C: special border color defined for each type), Callouts using dynamic tooltips.
– E: Faceting based on the type of entities.
– T: Form editing, Context menu, Ribbon editing.
– H: Automation, Recommendation.
– b: bindings defined in Figure 4.
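Such a hextuple (D, V, E, T, H, b) can be captured in a simple record, which is one way a model-driven generator could consume it. The field names below are our own; the values merely restate the RDFaCE configuration listed above:

```python
from dataclasses import dataclass

# Sketch: a WYSIWYM instantiation as the hextuple (D, V, E, T, H, b).
# Field names are our own; values restate the RDFaCE configuration.
@dataclass
class WysiwymInstantiation:
    D: list  # semantic representation models
    V: list  # visualization techniques (with binding configurations)
    E: list  # exploration techniques
    T: list  # authoring techniques
    H: list  # helper components
    b: str   # reference to the employed bindings

rdface = WysiwymInstantiation(
    D=["RDFa", "Microdata"],
    V=[("Framing using borders", "C1"), ("Callouts", "dynamic tooltips")],
    E=["Faceting based on the type of entities"],
    T=["Form editing", "Context menu", "Ribbon editing"],
    H=["Automation", "Recommendation"],
    b="bindings defined in Figure 4",
)
```

Comparing two instantiations then reduces to comparing records, which makes the design space of WYSIWYM interfaces easy to enumerate.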

RDFaCE comes with a special edition customized for Schema.org vocabularies6. In this version, different color schemes are defined for the vocabularies at Schema.org. Users are able to create a subset of Schema.org schemas for their intended domain and customize the colors for this subset. Furthermore, nested forms are dynamically generated from the selected schemas for the authoring and editing of annotations.

In order to evaluate RDFaCE usability, we conducted an experiment with 16 participants of the ISSLOD 2011 summer school7. The user evaluation comprised the following steps: First, some basic information about semantic content authoring, along with a demo showcasing different RDFaCE features, was presented to the participants as a 3-minute video. Then, participants were asked to use RDFaCE to annotate three text snippets – a wiki article, a blog post and a news article. For each text snippet, a timeslot of five minutes was available to use different features of RDFaCE for annotating occurrences of persons, locations and organizations with suitable entity references. Subsequently, a survey was presented to the participants where they were asked questions about their experience while working with RDFaCE. Questions were targeting six factors of usability [12], namely Fit for use, Ease of learning, Task efficiency, Ease of remembering, Subjective satisfaction and Understandability. Results of the survey are shown in Table 1. They indicate on average good to excellent usability for RDFaCE. A majority of the users deem RDFaCE fit for use and its functionality easy to remember. Also, ease of learning and subjective satisfaction were well rated by the participants. There was a slightly lower (but still above average) assessment of task efficiency and understandability, which we attribute to the short time participants had for familiarizing themselves with RDFaCE and the quite comprehensive functionality, which includes automatic annotations, recommendations and various WYSIWYM UI elements.

5 Microdata support is implemented in RDFaCE-Lite, available at http://rdface.aksw.org/lite
6 http://rdface.aksw.org/new
7 Summer school on Linked Data: http://lod2.eu/Article/ISSLOD2011

Usability Factor/Grade    Poor    Fair     Neutral  Good     Excellent
Fit for use               0%      12.50%   31.25%   43.75%   12.50%
Ease of learning          0%      12.50%   50%      31.25%   6.25%
Task efficiency           0%      0%       56.25%   37.50%   6.25%
Ease of remembering       0%      0%       37.50%   50%      12.50%
Subjective satisfaction   0%      18.75%   50%      25%      6.25%
Understandability         6.25%   18.75%   31.25%   37.50%   6.25%

Table 1. Usability evaluation results for RDFaCE.

Fig. 6. Usability evaluation results for Pharmer (0: Strongly disagree, 1: Disagree, 2: Neutral, 3: Agree, 4: Strongly agree).

Fig. 5. Screenshots of our two implemented WYSIWYM interfaces. Left: RDFaCE for semantic text authoring (T6 indicates the RDFaCE menu bar, V1 – the framing of named entities in the text, V9 – a callout showing additional type information, T5 – a context menu for revising annotations). Right: Pharmer for authoring of semantic prescriptions (V1 – highlighting of drugs through framing, V9 – additional information about a drug in a callout, T1/T2 – combined form and inline editing of electronic prescriptions).

Pharmer. Pharmer is a WYSIWYM interface for the authoring of semantically enriched electronic prescriptions. It enables physicians to embed drug-related metadata into e-prescriptions, thereby reducing the medical errors occurring in prescriptions and increasing the awareness of patients about the prescribed drugs and about drug consumption in general. In contrast to database-oriented e-prescriptions, semantic prescriptions are easily exchangeable among other e-health systems without the need to change their related infrastructure. The Pharmer implementation (cf. Figure 5, right) is open-source and available for download together with an explanatory video and online demo8 at http://code.google.com/p/pharmer/. It is based on the HTML5 contenteditable element. Pharmer as a WYSIWYM instantiation is defined using the following hextuple:

– D: RDFa.
– V: Framing using borders and background (C: special background color defined for each type), Callouts using dynamic popups.
– E: Faceting based on the type of entities.
– T: Form editing, Inline edit.
– H: Automation, Real-time tagging, Recommendation.
– b: bindings defined in Figure 4.

In order to evaluate the usability of Pharmer, we performed a user study with 13 subjects: 3 physicians, 4 pharmacists, 3 pharmaceutical researchers and 3 students. We first showed them a 3-minute tutorial video covering the different features of Pharmer and then asked each of them to create a semantic prescription with Pharmer. After finishing the task, we asked the participants to fill out a questionnaire. We used the System Usability Scale (SUS) [13], a standardized, simple, ten-item Likert-scale-based questionnaire, to grade the usability of Pharmer. SUS yields a single number in the range of 0 to 100 which represents a composite measure of the overall usability of the system. The results of our survey (cf. Figure 6) showed a mean usability score of 75 for the Pharmer WYSIWYM interface, which indicates a good level of usability. Participants particularly liked the integration of functionality and the ease of learning and use. The confidence in using the system was slightly lower, which we again attribute to the short learning phase and the diverse functionality.

8 http://bitili.com/pharmer
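For reference, a single respondent's SUS score is derived from the ten 1–5 Likert responses by the standard SUS scoring rule (odd-numbered items contribute response − 1, even-numbered items contribute 5 − response, and the sum is scaled by 2.5); the sample responses below are made up for illustration:

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1),
# even-numbered items contribute (5 - response); the sum is multiplied
# by 2.5 to yield a 0-100 score. Sample responses below are made up.
def sus_score(responses):
    """Compute the SUS score from ten Likert responses (1..5)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # prints 75.0
```

The reported mean of 75 for Pharmer is the average of such per-respondent scores over the 13 subjects.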

5. Conclusions

Bridging the gap between unstructured and semantic content is a crucial aspect for the ultimate success of semantic technologies. With the WYSIWYM concept, we presented in this article an approach for the integrated visualization, exploration and authoring of unstructured and semantic content. The WYSIWYM model binds elements of a knowledge representation formalism (or data model) to a set of suitable UI elements for visualization, exploration and authoring. Based on such a declarative binding mechanism, we aim to increase the flexibility, reusability and development efficiency of semantics-rich user interfaces.

We deem this work a first step in a larger research agenda aiming at improving the usability of semantic user interfaces, while retaining semantic richness and expressivity. In future work we envision adopting a model-driven approach to enable the automatic implementation of WYSIWYM interfaces from user-defined preferences. This will help to reuse, re-purpose and choreograph WYSIWYM UI elements to accommodate the needs of dynamically evolving information structures and ubiquitous interfaces. We also aim to bootstrap an ecosystem of WYSIWYM instances and UI elements to support structure encoded in different modalities, such as images and videos. Creating live and context-sensitive WYSIWYM interfaces which can be generated on-the-fly based on the ranking of available UI elements is another promising research avenue.

Acknowledgments

We would like to thank our colleagues from the AKSW research group for their helpful comments and inspiring discussions during the development of the WYSIWYM model. This work was partially supported by a grant from the European Union's 7th Framework Programme provided for the project LOD2 (GA no. 257943).

References

[1] One Click Annotation. Volume 699 of CEUR Workshop Proceedings, February 2010.

[2] A. Ankolekar, M. Krötzsch, T. Tran, and D. Vrandecic. The two cultures: mashing up Web 2.0 and the Semantic Web. In WWW 2007, pages 825–834.

[3] G. Burel, A. E. Cano, and V. Lanfranchi. Ozone browser: Augmenting the web with semantic overlays. Volume 449 of CEUR Workshop Proceedings, June 2009.

[4] A.-S. Dadzie and M. Rowe. Approaches to visualising linked data: A survey. Semantic Web, 2(2):89–124, 2011.

[5] L. Deligiannidis, K. J. Kochut, and A. P. Sheth. RDF data exploration and visualization. In CIMS 2007, pages 39–46. ACM, 2007.

[6] H. Haller and A. Abecker. iMapping: a zooming user interface approach for personal and semantic knowledge management. SIGWEB Newsletter, pages 4:1–4:10, September 2010.

[7] M. Heinrich, F. Lehmann, T. Springer, and M. Gaedke. Exploiting single-user web applications for shared editing: a generic transformation approach. In WWW 2012, pages 1057–1066. ACM, 2012.

[8] L. Hong and E. H. Chi. Annotate once, appear anywhere: collective foraging for snippets of interest using paragraph fingerprinting. In CHI 2009, pages 1791–1794. ACM.

[9] D. R. Karger, S. Ostler, and R. Lee. The web page as a WYSIWYG end-user customizable database-backed information management application. In UIST 2009, pages 257–260. ACM, 2009.

[10] A. Khalili and S. Auer. User interfaces for semantic authoring of textual content: A systematic literature review. 2012.

[11] A. Klebeck, S. Hellmann, C. Ehrlich, and S. Auer. OntosFeeder – a versatile semantic context provider for web content authoring. In ISWC 2011, volume 6644 of LNCS, pages 456–460.

[12] S. Lauesen. User Interface Design: A Software Engineering Perspective. Addison Wesley, February 2005.

[13] J. Lewis and J. Sauro. The factor structure of the System Usability Scale. In Human Centered Design, volume 5619 of LNCS, pages 94–103. 2009.

[14] J. Lin, M. Thomsen, and J. A. Landay. A visual language for sketching large and complex interactive designs. In CHI 2002, pages 307–314. ACM.

[15] A. Loecken, T. Hesselmann, M. Pielot, N. Henze, and S. Boll. User-centred process for the definition of free-hand gestures applied to controlling music playback. Multimedia Systems, 18(1):15–31, 2012.

[16] V. Lopez, V. Uren, M. Sabou, and E. Motta. Is question answering fit for the semantic web? A survey. Semantic Web – Interoperability, Usability, Applicability, 2(2):125–155, September 2011.

[17] M. Luczak-Roesch and R. Heese. Linked data authoring for non-experts. In WWW Workshop on Linked Data on the Web (LDOW 2009), 2009.

[18] W. Muller, I. Rojas, A. Eberhart, P. Haase, and M. Schmidt. A-R-E: The author-review-execute environment. Procedia Computer Science, 4:627–636, 2011. ICCS 2011.

[19] B. A. Myers. A brief history of human-computer interaction technology. interactions, 5(2):44–54, 1998.

[20] S. Oviatt, P. Cohen, L. Wu, J. Vergo, L. Duncan, B. Suhm, J. Bers, T. Holzman, T. Winograd, J. Landay, J. Larson, and D. Ferro. Designing the user interface for multimodal speech and pen-based gesture applications: state-of-the-art systems and future research directions. Human-Computer Interaction, 15(4):263–322, December 2000.

[21] E. Pietriga, C. Bizer, D. R. Karger, and R. Lee. Fresnel: A browser-independent presentation vocabulary for RDF. In ISWC 2006, LNCS, pages 158–171. Springer, 2006.

[22] R. Power, D. Scott, and R. Evans. What you see is what you meant: direct knowledge editing with natural language feedback, 1998.

[23] M. Sah, W. Hall, N. M. Gibbins, and D. C. De Roure. SemPort – a personalized semantic portal. In 18th ACM Conference on Hypertext and Hypermedia, pages 31–32, 2007.

[24] C. Sauer. What you see is wiki – questioning WYSIWYG in the Internet age. In Proceedings of Wikimania 2006, 2006.

[25] B. Shneiderman. Creating creativity: user interfaces for supporting innovation. ACM Transactions on Computer-Human Interaction, 7(1):114–138, March 2000.

[26] J. Spiesser and L. Kitchen. Optimization of HTML automatically generated by WYSIWYG programs. In WWW 2004, pages 355–364.

[27] D. Tunkelang. Faceted Search (Synthesis Lectures on Information Concepts, Retrieval, and Services). Morgan and Claypool Publishers, June 2009.