Architectural, Spatial, and Navigational Metaphors as Design
Points for Collaboration
John “Boz” Handy-Bosma, Ph.D., Chief Architect for Collaboration, IBM Office of the CIO
For KM Chicago, May 8, 2012
Credit: Ogilvy
Credit: Fellowship of the Rich, Flickr
MOSFET Architecture, scaling recipes, and Moore’s Law

If scaled by constant κ:
- Dimensions: 1/κ
- Voltages: 1/κ
- Doping levels: κ

Decreased by:
- Circuit delay: 1/κ
- Power/circuit: 1/κ²
- Power-delay product: 1/κ³
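As a minimal sketch of how Dennard's recipe composes, the scaling outcomes can be computed for any κ. The function name and structure here are my own, not from the deck:

```python
# Illustrative sketch of Dennard's constant-field scaling recipe.
# kappa is the scaling constant; all names are assumptions for illustration.

def dennard_scaling(kappa):
    """Return the scaling factors predicted by Dennard's recipe."""
    return {
        "dimensions": 1 / kappa,             # L, W, t_ox shrink by 1/kappa
        "voltage": 1 / kappa,                # supply voltage scales with size
        "doping": kappa,                     # doping concentration grows by kappa
        "circuit_delay": 1 / kappa,          # circuits get faster
        "power_per_circuit": 1 / kappa ** 2, # power per circuit falls as 1/kappa^2
        "power_delay_product": 1 / kappa ** 3,
    }

factors = dennard_scaling(2.0)  # halve feature size
print(factors["power_delay_product"])  # 0.125
```

The point of the recipe is that the inputs move together in fixed ratios, so the figures of merit move predictably too.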
Recipes in Dennard's Scaling Theory
Factors maintained in constant ratio
Consistent inputs, in proportion
Predictable outcomes on figures of merit
Figure of merit: a quantity characterizing performance
Used for benchmarking and comparisons
e.g., clock speed in a CPU, wicking factor in fabrics
Consistent measurement
Related figures held to constant performance, not degraded
Not classical power laws
Multiple factors
Relationships among factors
No claims about development in relation to time
How to achieve ratios is not addressed
Where to look for scaling principles?
(Answer: where things are clogged or crowded)
An approach: identify practical recipes for improving Collaboration and Search. Use these as input to decisions on architecture and design.
Factors maintained in constant ratio (roughly)
Consistent improvement in specific factors, in proportion
Predictable outcomes in figures of merit
Recipes enable balanced improvement in search, collaboration, and metrics
•Precision and recall
•Content and metadata
•Adoption and Use
Experimentation allows measurement and improvement on key measures, but it is important to identify potential trade-offs in figures of merit resulting from technical and social factors:
Serial navigation (similar to Fitts's Law)
Impact of follower models on signal-to-noise ratio of communication
Multiple factors:
• Wayfinding (e.g., navigating, searching, sorting, filtering)
• Information production (e.g., quality and quantity of authoring, tagging, publishing)
• Bidirectional (e.g., reciprocal networking among participants)
Relationships: mutually reinforcing, mutually impinging, exponential
Factors are expressed via specific solutions as used in the field
Wayfinding and Isovist: How is search relevance measured?
Key terms and definitions:
- Relevance: a subjective measure of whether a document in a search result answers a query
- Precision: the percentage of documents in a result list that answer a query
- Recall: the percentage of relevant documents in a result list relative to all relevant documents in the collection
- Pertinence: a subjective measure of whether a document in a search result answers a query, in light of previous knowledge or experience
- Aboutness: the subjects and topics conveyed by a document or query
- Isovist: the pertinent items visible or not visible at any given point in a navigational sequence
Test for performance using known corpora and results (e.g., TREC)
Typically uses a single query and response, rather than a series of interactions between users and search engine
Geared toward top of results list
But traditional approaches are not sufficient to measure relevance of results, where relevance is determined by social interaction and collaboration outcomes!
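Precision and recall, as defined above, can be computed directly for a single query in the TREC style. This is a minimal sketch; the document IDs and helper name are invented for illustration:

```python
# Sketch: precision and recall for one query against a known corpus.
# "retrieved" is the result list; "relevant" is the judged relevant set.

def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / retrieved; recall = relevant retrieved / all relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: the engine returns 4 documents, 3 answer the query,
# and 6 relevant documents exist in the collection.
p, r = precision_recall(["d1", "d2", "d3", "d9"],
                        ["d1", "d2", "d3", "d4", "d5", "d6"])
print(p, r)  # 0.75 0.5
```

Note that this measures one query/response pair in isolation, which is exactly the limitation the slide calls out: it says nothing about a series of interactions or collaboration outcomes.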
Example: What aspects of metadata facilitate collaboration?
Collaboration capability / Metadata features:
- Integrating disparate bodies of content from multiple sources and communities:
  - Incorporate global and local extensions to vocabulary
  - Query modification to allow lateral navigation
  - Matching on shared interests
- Team coordination:
  - Content previews, review and approval, collaborative workflow
  - Tagging at group level
  - Metadata suggestions
- Positive network effects from sharing in social channels:
  - Social tagging and bookmarking
  - Rankings and ratings
  - Clickstream analysis for ranking
- Knowledge elicitation:
  - Query expansion: a) conditional metadata, b) "Did you mean?"
  - Tag notifications
- Facilitate collaboration among disparate language communities:
  - Unique and mapped display values (e.g., Social Authority)
Example: when is metadata search helpful to collaboration?
When metadata search?
✔ Multiple set membership for searchables
✔ Sufficient completeness and quality of classification scheme
✔ Adequate accuracy of categorization
✔ Leads to improved effective precision and time to find
When not metadata search?
✗ Precise results can be obtained without metadata
✗ When metadata leads to undesirable phenomena such as conjunction search, serial navigation, or error propagation
Often assumed, but questionable:
? That a single large corpus is to be searched
? That metadata require hierarchical taxonomy with many classifiers
? That agreement on taxonomy is needed
? That searches are for documents (as opposed to collections of documents, parts of documents, people, facts, etc.)
? That metadata operations only involve “anding” on attributes to find instances
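To make the questioned assumption concrete, here is a minimal sketch of the "anding" operation on metadata attributes, alongside multiple set membership. The document records and attribute names are invented for illustration:

```python
# Sketch: documents may belong to multiple sets per attribute (multiple set
# membership); an "anding" query intersects attribute values to find instances.
# All records and attribute names are assumptions, not from the deck.

docs = {
    "doc1": {"topic": {"search", "metadata"}, "team": {"cio"}},
    "doc2": {"topic": {"search"}, "team": {"research"}},
    "doc3": {"topic": {"collaboration", "metadata"}, "team": {"cio"}},
}

def and_query(docs, **criteria):
    """Return doc IDs whose attribute sets contain every requested value."""
    return sorted(
        doc_id for doc_id, attrs in docs.items()
        if all(value in attrs.get(attr, set()) for attr, value in criteria.items())
    )

print(and_query(docs, topic="metadata", team="cio"))  # ['doc1', 'doc3']
```

The slide's point stands: nothing forces metadata operations to be only this intersection; lateral navigation, query expansion, and people or fact targets all fall outside the pure "anding" model.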
Measuring effective precision of metadata search
• Log sequence of user actions in a search session (queries, metadata selections, links)
• Work backward from a known result (document click, download, print, tag, bookmark, notify, rate, exit)
• Establish influence of each step in sequence on ranking of document(s) that elicited that result (via rankings in results list)
• Query by segments of interest using aggregated data
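The backward-attribution steps above can be sketched as follows. This is a minimal illustration; the session-log format and field names are assumptions, not the deck's actual schema:

```python
# Sketch: given a logged search session and the document the user finally
# acted on (click, download, tag, etc.), credit each earlier step by where
# that document ranked in the step's result list.

session = [
    {"action": "query", "results": ["d7", "d2", "d5"]},
    {"action": "facet", "results": ["d5", "d7"]},
    {"action": "click", "doc": "d5"},
]

def step_influence(session):
    """Map each step index to the clicked document's rank (1-based) at that step."""
    clicked = next(s["doc"] for s in session if s["action"] == "click")
    return {
        i: s["results"].index(clicked) + 1
        for i, s in enumerate(session)
        if "results" in s and clicked in s["results"]
    }

print(step_influence(session))  # {0: 3, 1: 1}
```

Here the facet selection moved the eventually-clicked document from rank 3 to rank 1, which is the kind of per-step influence that can then be aggregated and queried by segment.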
(Diagram: search queries and clickstream data, captured via privacy-preserving cookies, feed a clickstream repository; together with a segmentation database and survey and ratings repositories, these feed sequence analysis.)
Example: Is stemming improving the search results?
Method: A-B tests using stemming; sample measures of search precision
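A minimal sketch of such an A-B comparison follows. The sample queries, relevance judgments, and result lists are invented for illustration:

```python
# Sketch: compare mean precision over a query sample for two search
# configurations (stemming on vs. off). All data below is invented.

from statistics import mean

def mean_precision(results_by_query, relevant_by_query):
    """Average per-query precision over a sample of queries."""
    precisions = []
    for q, retrieved in results_by_query.items():
        relevant = relevant_by_query[q]
        hits = [d for d in retrieved if d in relevant]
        precisions.append(len(hits) / len(retrieved) if retrieved else 0.0)
    return mean(precisions)

relevant = {"q1": {"a", "b"}, "q2": {"c"}}
variant_a = {"q1": ["a", "b"], "q2": ["c", "x"]}  # stemming on
variant_b = {"q1": ["a", "x"], "q2": ["x", "y"]}  # stemming off

print(mean_precision(variant_a, relevant))  # 0.75
print(mean_precision(variant_b, relevant))  # 0.25
```

In practice the two variants would be served to comparable user segments and the precision samples tested for a significant difference before transitioning the winning variable to a constant.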
Optimization Cycle
1. Configure practices and tools
2. Observe practice
3. Evaluate bottlenecks
4. Propose new variables
5. Build new configuration
6. Measure outcomes
7. Transition variables to constants