The Hedgehog Review: Vol. 14, No. 1 (Spring 2012)

Why Google Isn’t Making Us Stupid…or Smart

Chad Wellmon

Reprinted from The Hedgehog Review 14.1 (Spring 2012). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.

Last year The Economist published a special report not on the global financial crisis or the polarization of the American electorate, but on the era of big data. Article after article cited one big number after another to bolster the claim that we live in an age of information superabundance. The data are impressive: 300 billion emails, 200 million tweets, and 2.5 billion text messages course through our digital networks every day, and, if these numbers were not staggering enough, scientists are reportedly awash in even more information. This past January astronomers surveying the sky with the Sloan telescope in New Mexico released over 49.5 terabytes of information—a mass of images and measurements—in one data drop. The Large Hadron Collider at CERN (the European Organization for Nuclear Research), however, produces almost that much information per second. Last year alone, the world’s information base is estimated to have doubled every eleven hours. Just a decade ago, computer professionals spoke of kilobytes and megabytes. Today they talk of the terabyte, the petabyte, the exabyte, the zettabyte, and now the yottabyte, each a thousand times bigger than the last.

Some see this as information abundance, others as information overload. The advent of digital information and with it the era of big data allows geneticists to decode the human genome, humanists to search entire bodies of literature, and businesses to spot economic trends. But it is also creating for many the sense that we are being overwhelmed by information. How are we to manage it all? What are we to make, as Ann Blair asks, of a zettabyte of information—a one with 21 zeros after it? From a more embodied, human perspective, these tremendous scales of information are rather meaningless. We do not experience information as pure data, be it a byte or a yottabyte, but as filtered and framed through the keyboards, screens, and touchpads of our digital technologies. However impressive these astronomical scales of information may be, our contemporary awe and increasing worry about all this data obscures the ways in which we actually engage it and the world of which it and we are a part. All of the chatter about information superabundance and overload tends not only to marginalize human persons, but also to render technology just as abstract as a yottabyte. An email is reduced to yet another data point, the Web to an infinite complex of protocols and machinery, Google to a neutral machine for producing information. Our compulsive talk about information overload can isolate and abstract digital technology from society, human persons, and our broader culture. We have become distracted by all the data and inarticulate about our digital technologies.

The more pressing, if more complex, task of our digital age, then, lies not in figuring out what comes after the yottabyte, but in cultivating contact with an increasingly technologically formed world. In order to understand how our lives are already deeply formed by technology, we need to consider information not only in the abstract terms of terabytes and zettabytes, but also in more cultural terms. How do the technologies that humans form to engage the world come in turn to form us? What do these technologies that are of our own making and irreducible elements of our own being do to us? The analytical task lies in identifying and embracing forms of human agency particular to our digital age, without reducing technology to a mere mechanical extension of the human, to a mere tool. In short, asking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable.

Two Narratives

The history of this mutual constitution of humans and technology has been obscured as of late by the crystallization of two competing narratives about how we experience all of this information. On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge. The digital world will become a “single liquid fabric of interconnected words and ideas,” a form of knowledge without distinctions or differences. Unlike other technological innovations, like print, which was limited to the educated elite, the internet is a network of “densely interlinked Web pages, blogs, news articles and Tweets [that] are all visible to anyone and everyone.” Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us. These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological. The dream is of a post-human era.

On the other hand, less sanguine observers interpret the advent of digitization and big data as portending an age of information overload. We are suffering under a deluge of data. Many worry that the Web’s hyperlinks that propel us from page to page, the blogs that reduce long articles to a more consumable line or two, and the tweets that condense thoughts to 140 characters have all created a culture of distraction. The very technologies that help us manage all of this information are undermining our ability to read with any depth or care. The Web, according to some, is a deeply flawed medium that facilitates a less intensive, more superficial form of reading. When we read online, we browse, we scan, we skim. The superabundance of information, such critics charge, however, is changing not only our reading habits, but also the way we think. As Nicholas Carr puts it, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.” The constant distractions of the internet—think of all those hyperlinks and new message warnings that flash up on the screen—are degrading our ability “to pay sustained attention,” to read in depth, to reflect, to remember. For Carr and many others like him, true knowledge is deep, and its depth is proportional to the intensity of our attentiveness. In our digital world that encourages quantity over quality, Google is making us stupid.

Each of these narratives points to real changes in how technology impacts humans. Both the scale and the acceleration of information production and dissemination in our digital age are unique. Google, like every technology before it, may well be part of broader changes in the ways we think and experience the world. Both narratives, however, make two basic mistakes.

First, they imagine our information age to be unprecedented, but information explosions and the utopian and apocalyptic pronouncements that accompany them are an old concern. The emergence of every new information technology brings with it new methods and modes for storing and transmitting ever more information, and these technologies deeply impact the ways in which humans interact with the world. Both the optimism of technophiles who predict the emergence of a digital “liquid” intelligence and the pessimism of those who fear that Google is “making us stupid” echo historical hopes and complaints about large amounts of information.

Second, both narratives make a key conceptual error by isolating the causal effects of technology. Technologies, be it the printed book or Google, do not make us unboundedly free or unflaggingly stupid. Such a sharp dichotomy between humans and technology simplifies the complex, unpredictable, and thoroughly historical ways in which humans and technologies interact and form each other. Simple claims about the effects of technology obscure basic assumptions, for good or bad, about technology as an independent cause that eclipses causes of other kinds. They assume the effects of technology can be easily isolated and abstracted from their social and historical contexts.

Instead of thinking in such dichotomies or worrying about all of those impending yottabytes, we might consider a perhaps simple but oftentimes overlooked fact: we access, use, and engage information through technologies that help us select, filter, and delimit. Web browsers, hyperlinks, blogs, online newspapers, computational algorithms, RSS feeds, Facebook, and Google help us turn all of those terabytes of data into something more useful and particular, that is, something that can be remade and repurposed by an embodied human person. These now ubiquitous technologies help us filter the essential from the excess and search for the needle in the haystack, and in so doing they have become central mediums for our experience of the world.

In this sense, technology is neither an abstract flood of data nor a simple machine-like appendage subordinate to human intentions, but instead the very manner in which humans engage the world. To celebrate the Web, or any other technology, as inherently edifying or stultifying is to ignore its more human scale: our individual access to this imagined expanse of pure information is made possible by technologies that are constructed, designed, and constantly tweaked by human decisions and experiences. These technologies do not exist independently of the human persons who design and use them. Likewise, to suggest that Google is making us stupid is to ignore the historical fact that over time technologies have had an effect on how we think, but in ways that are much more complex and not at all reducible to simple statements like “Google is making us stupid.”

Think of it this way: the Web in its entirety—just like those terabytes of information that we imagine weighing down upon us—is inaccessible to the ill-equipped person. Digital technologies make the Web accessible by making it seem much smaller and more manageable than we imagine it to be. The Web does not exist. In this sense, the history of information overload is instructive less for what it teaches us about the quantity of information than what it teaches us about how the technologies that we design to engage the world come in turn to shape us. The specific technologies developed to manage information can give us insight into how we organize, produce, and distribute knowledge—that is, the history of information overload is a history of how we know what we know. It is not only the history of data, books, and the tools used to cope with them. It is also a history of ourselves and of the environment within which we make and in turn are made by technologies.

In the following sections, I put our information age in historical context in an effort to demonstrate that technology’s impact on the human is both precedented and constitutive of new forms of life, new norms, and new cultures. The concluding sections focus on Google in particular and consider how it is impacting our very notion of what it is to be human in the digital age. Carr and other critics of the ways we have come to interact with our digital technologies have good reason to be concerned, but, as I hope to show, for rather different reasons than they might think. The core issue concerns not particular modes of accommodating new technologies—nifty advice on dealing with email or limiting screen time—but our very conception of the relationship between the human and technology.

Too Many Books

As historian Ann Blair has recently demonstrated, our contemporary worries about information overload resonate with historical complaints about “too many books.” Historical analogues afford us insight not only into the history of particular anxieties, but also into the ways humans have always been impacted by their own technologies. These complaints have their biblical antecedents: Ecclesiastes 12:12, “Of making books there is no end”; their classical ones: Seneca, “the abundance of books is a distraction”; and their early modern ones: Leibniz, the “horrible mass of books keeps growing.” After the invention of the printing press around 1450 and the attendant drop in book prices, according to some estimates by as much as 80 percent, these complaints took on new meaning. As the German philosopher and critic Johann Gottfried Herder put it in the late eighteenth century, the printing press “gave wings” to paper.

Complaints about too many books gained particular urgency over the course of the eighteenth century when the book market exploded, especially in England, France, and Germany. Whereas today we imagine ourselves to be engulfed by a flood of digital data, late eighteenth-century German readers, for example, imagined themselves to have been infested by a plague of books [Bücherseuche]. Books circulated like contagions through the reading public. These anxieties corresponded to a rapid increase in new print titles in the last third of the eighteenth century, an increase of about 150 percent from 1770 to 1800 alone.

Similar to contemporary worries that Google and Wikipedia are making us stupid, these eighteenth-century complaints about “excess” were not merely descriptive. In 1702 the jurist and philosopher Christian Thomasius laid out some of the normative concerns that would gain increasing traction over the course of the century. He described the writing and business of books as a

kind of Epidemic disease, which hath afflicted Europe for a long time, and is more fit to fill warehouses of booksellers, than the libraries of the Learned. Any one may understand this to be meant of that itching desire to write books, which people are troubled with at this time. Heretofore none but the learned, or at least such as ought to be accounted so, meddled with this subject, but now-a-days there is nothing more common, it extends itself through all professions, so that now almost the very Coblers, and Women who can scarce read, are ambitious to appear in print, and then we may see them carrying their books from door to door, as a Hawker does his comb cases, pins and laces.

The emergence of a print book market lowered the bar of entry for authors and gradually began to render traditional filters and constraints on the production of books increasingly inadequate. The perception of an excess of books was motivated by a more basic assumption about who should and should not write them.

At the end of the century, even book dealers had grown weary of a market that seemed to be growing out of control. In his 1795 screed, Appeal to My Nation: On the Plague of German Books, the German bookseller and publisher Johann Georg Heinzmann lamented that “no nation has printed so much as the Germans.” For Heinzmann, late eighteenth-century German readers suffered under a “reign of books” in which they were the unwitting pawns of ideas that were not their own. Giving this broad cultural anxiety a philosophical frame, and beating Carr to the punch by more than two centuries, Immanuel Kant complained that such an overabundance of books encouraged people to “read a lot” and “superficially.” Extensive reading not only fostered bad reading habits, but also caused a more general pathological condition, Belesenheit [the quality of being well-read], because it exposed readers to the great “waste” [Verderb] of books. It cultivated uncritical thought.

Like contemporary worries about “excess,” these were fundamentally normative. They made particular claims not only about what was good or bad about print, but about what constituted “true” knowledge. First, they presumed some unstated yet normative level of information or, in the case of a Bücherseuche, some normative number of books. There are too many books; there is too much data. But compared to what? Second, such laments presumed the normative value of particular practices and technologies for dealing with all of these books and all of this information. Every complaint about excess was followed by a proposal on how to fix the apparent problem. To insist that there are too many books was to insist that there were too many books to be read or dealt with in a particular way and thus to assume the normative value of one form of reading over another.

Enlightenment Reading Technologies

Not so dissimilar to contemporary readers with their digital tools, eighteenth-century German readers had a range of technologies and methods at their disposal for dealing with the proliferation of print—dictionaries, bibliographies, reviews, note-taking, encyclopedias, marginalia, commonplace books, footnotes. These technologies made the increasing amounts of print more manageable by helping readers to select, summarize, and organize an ever-increasing store of information. The sheer range of technologies demonstrates that humans usually deal with information overload through creative and sometimes surprising solutions that blur the line between humans and technology.

By the late seventeenth and early eighteenth centuries, European readers dealt with the influx of new titles and the lack of funds and time to read them all by creating virtual libraries called bibliotheca. At first these printed texts were simply listings of books that had been published or displayed at book fairs, but over time they began to include short reviews and summaries intended to guide the collector, scholar, and amateur in their choice and reading of books. They also allowed eighteenth-century readers to avoid reading entire books by providing summaries of individual books.

Eighteenth-century readers also made use of an increasing array of encyclopedias. In contrast to their early modern Latin predecessors that sought to summarize the most significant branches of established knowledge (designed to present an enkuklios paideia, or common knowledge), these Enlightenment encyclopedias were produced and sold as reference books that disseminated information more widely and efficiently by compiling, selecting, and summarizing more specialized and, above all, new knowledge. They made knowledge more general and common by sifting and constraining the purview of knowledge.

Similarly, compilations, which date from at least the early modern period, employed cut and paste technologies, rather than summarization, to select, collect, and distribute the best passages from an array of books. A related search technology, the biblical concordance—the first dates back to 1247—indexed every word of the Bible and facilitated its broader use for sermons and, after its translations into the vernacular, even broader audiences. Similarly, indexes became increasingly popular and big selling points of printed texts by the sixteenth century.
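Viewed as a data structure, the concordance is an early inverted index: a mapping built once from every word to the passages in which it occurs, so that a reader can consult rather than reread. Here is a minimal sketch of that technique in Python; the verse references and wording are invented placeholders, not data from any historical concordance.

```python
# A minimal sketch of a concordance as an inverted index: each word maps
# to every passage in which it occurs. The verses are invented examples.
from collections import defaultdict

verses = {
    "1:1": "of making books there is no end",
    "1:2": "the abundance of books is a distraction",
}

concordance = defaultdict(list)
for reference, text in verses.items():
    for word in text.split():
        concordance[word].append(reference)

# Consultative reading: jump directly to every passage containing "books"
# without reading the text straight through.
print(concordance["books"])  # ['1:1', '1:2']
```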

All of these technologies facilitated a consultative reading that allowed a text to be accessed in parts instead of reading a text straight through from beginning to end. By the early eighteenth century, there was even a science devoted to organizing and accounting for all of these technologies and books: historia literaria. It produced books about books. The technologies and methods for organizing and managing all of these books and information were embedded into other forms and even other sciences.

All of these devices and technologies provided shortcuts and methods for filtering and searching the mass of printed or scribal texts. They were technologies for managing two perennially precious resources: money (books and manuscripts were expensive) and time (it takes a lot of time to read every word).

While many overwhelmed readers welcomed these techniques and technologies, some, especially by the late eighteenth century, began to complain that they led to a derivative, second-hand form of knowledge. One of Kant’s students and a key figure of the German Enlightenment, J. G. Herder, mocked the French for their attempts to deal with such a proliferation of print through encyclopedias:

Now encyclopedias are being made, even Diderot and D’Alembert have lowered themselves to this. And that book that is a triumph for the French is for us the first sign of their decline. They have nothing to write and, thus, produce Abregés, vocabularies, esprits, encyclopedias—the original works fall away.

Echoing contemporary concerns about how our reliance on Google and Wikipedia might lead to superficial forms of knowledge, Herder worried that these technologies reduced knowledge to discrete units of information. Journals reduced entire books to a paragraph or blurb; encyclopedias aggregated huge swaths of information into a deceptively simple form; compilations separated readers from the original texts.

By the mid-eighteenth century, the word “polymath”—previously used positively to describe a learned person—became synonymous with dilettante, one who merely skimmed, aggregated, and heaped together mounds of information but never knew much at all. In sum, encyclopedias and the like had reduced the Enlightenment project, these critics claimed, to mere information management. At stake was the definition of “true” knowledge. Over the course of the eighteenth century, German thinkers and authors began to make a normative distinction between what they termed Gelehrsamkeit and Wissen, between mere pedantry and true knowledge.

As this brief history of Enlightenment information technologies suggests, to claim that a particular technology has one unique effect, either positive or negative, is to reduce both historically and conceptually the complex causal nexus within which humans and technologies interact and shape each other. Carr’s recent and broadly well-received arguments wondering if Google makes us stupid, for example, rely on a historical parallel that he draws with print. He claims that the invention of printing “caused a more intensive” form of reading and, by extrapolation, print caused a more reflective form of thought—words on a page focused the reader.

Historically speaking, this is hyperbolic techno-determinism. Carr assumes that technologies simply “determine our situation,” independent of human persons, but these very technologies, methods, and media emerge from particular historical situations with their own complex of factors. Carr relies on quick allusions to historians of print to bolster his case and inoculate himself from counter-arguments, but the historian of print to whom he appeals, Elizabeth L. Eisenstein, warns that “efforts to summarize changes wrought by printing in any simple or single formula are likely to lead us astray.”

Arguments like Carr’s—and I focus on him because he has become the vocal advocate of this view—also tend to ignore the fact that, historically, print facilitated a range of reading habits and styles. Francis Bacon, himself prone to condemning printed books, laid out at least three ways to read books: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested.” As a host of scholars have demonstrated of late, different ways of reading co-existed in the print era. Extensive or consultative forms of reading—those that Carr might describe as distracted or unfocused—existed alongside more intensive forms of reading—those that he might describe as deep, careful, prolonged engagements with particular texts in the Enlightenment. Eighteenth-century German Pietists read the Bible very closely, but they also consistently consulted Bible concordances and Latin encyclopedias. Even the form of intensive reading held up today as a dying practice, novel reading, was often derided in the eighteenth century as weakening the memory and leading to “habitual distraction,” as Kant put it. It was thought especially dangerous to women who, according to Kant, were already prone to such lesser forms of thought. In short, print did not cause one particular form of reading; instead, it facilitated a range of ever-newer technologies, methods, and innovations that were deeply interwoven with new forms of human life and new ways of experiencing the world.

The problem with suggestions that Google makes us stupid, smart, or whatever else we might imagine, however, is not just their historical myopia. Such reductions elide the fact that Google and print technology do not operate independently of the humans who design, interact with, and constantly modify them, just as humans do not exist independently of technologies. By focusing on technology’s capacity to determine the human (by insisting that Google makes us stupid, that print makes us deeper readers), we risk losing sight of just how deeply our own agency is wrapped up with technology. We forego a more anthropological perspective from which we can observe “the activity of situated people trying to solve local problems.” To emphasize a single and direct causal link between technology and a particular form of thought is to isolate technology from the very forms of life with which it is bound up.

Considering our anxieties and utopic fantasies about technology or information superabundance in a more historical light is one way to mitigate this tendency and gain some conceptual clarity. Thus far I have offered some very general historical and conceptual observations about technology and the history of information overload. In the next sections, I focus on one particular historical technology—the footnote—and its afterlife in our contemporary digital world.

The Footnote: From Kant to Google

Today our most common tools for organizing knowledge are algorithms and data structures. We often imagine them to be unprecedented. But Google’s search engines take advantage of a rather old technology—that most academic and seemingly useless thing called the footnote. Although Google continues to tweak and improve its search engines, the data that continue to fuel them are hyperlinks, those blue colored bits of texts on the Web that if clicked will take you to another page. They are the sinews of the Web, which is simply the totality of all hyperlinks. The World Wide Web emerged in part from the efforts of a British physicist working at CERN in the early 1990s, Tim Berners-Lee. Frustrated by the confusion that resulted from a proliferation of computers, each with its own codes and formats, he wondered how they could all be connected. He took advantage of the fact that regardless of the particular code, every computer had documents. He went on to work on codes for HTML, URLs, and HTTP that could link these documents regardless of the differences among the computers themselves. It turns out that these digital hyperlinks have a revealing historical and conceptual antecedent in the Enlightenment footnote.

The modern hyperlink and the Enlightenment footnote share a logic that is grounded in assumptions about the text-based nature of knowledge. Both assume that documents, the printed texts of the eighteenth century or the digitized ones of the twenty-first century, are the basis of knowledge. And these assumptions have come to dominate not only the way we search the Web, but also the ways we interact with our digital world. The history of the footnote is a curious but perspicuous example, then, of how normative, cultural assumptions and values become embedded in technology.

Footnotes have a long history in biblical commentaries and medieval annotations. Whereas these scriptural commentaries simply “buttressed a text” that derived its ultimate authority from some divine source, Enlightenment footnotes pointed to other Enlightenment texts. They highlighted the fact that these texts were precisely not divine or transcendent. They located the work in a particular time and place. The modern footnote anchors a text and grounds its authority not in some transcendent realm, but in the footnotes themselves. Unlike biblical commentaries, modern footnotes “seek to show that the work they support claims authority and solidity from the historical conditions of its creation.” The Enlightenment’s citational logic is fundamentally self-referential and recursive—that is, the criteria for judgment are always given by the system of texts themselves and not something external, like divine or ecclesial authority. The value and authority of one text is established by the fact that other texts point to it. The more footnotes that point to a particular text, the more authoritative that text becomes by dint of the fact that other texts point to it.

Online newspapers and blogs are central to our public debates, but printed journals were the central medium of the Enlightenment. One of the most famous German journals was the Berlinische Monatsschrift published between 1783 and 1811. It published the most important articles to a broad and increasingly diverse reading public. In its first issue, the editors wrote that the journal sought “news from the entire empire [Reich] of the sciences”—ethnographic reports, biographical reports about interesting people, translations, excerpts from texts from foreign lands. The editors envisioned the journal as a central node in the broader world of information exchange and circulation. This editorial plan was then carried out according to a citational logic that structured the entire journal.

The journal’s first essay, “On the Origin of the Fable of the Woman in White,” centers on a fable “drawn” from another text of 1723. This citation is followed by another one citing another history, published in 1753, on the origins of the fable. The rest of the essay cites “various language scholars and scholars of antiquity” [Sprach- und Alterthumsforscher] to authorize its own claims. The citations and footnotes that fill the margins and the parenthetical directives that are peppered throughout the main text not only give authority to the broader argument and narrative, but also create a web of interconnected texts.

Even Kant’s famous essay on the question of Enlightenment, which appeared in the same journal in 1784, begins not with a philosophical argument, but with a footnote directly underneath the title, directing the reader to a footnote from another essay published in December of 1783 that posed the original question: “What is Enlightenment?” This essay in turn directs readers to yet another article on Enlightenment from September of that year. The traditional understanding of Enlightenment is based on the self-legislation and autonomy of reason, but all of these footnotes suggest that Enlightenment reason was bound up with print technology from the beginning.

One of the central mediums of the Enlightenment, journals, operated according to a citational logic. The authority, relevance, and value of a text was undergirded—both conceptually and visually—by an array of footnotes that pointed to other texts. Like our contemporary hyperlinks, these citations interrupted the flow of reading—marked as they often were by a big asterisk or a “see page 516.” Perhaps most importantly, however, all of these footnotes and citations pointed not to a single divinely inspired or authoritative text, but to a much broader network of texts. Footnotes and citations were the pointing sinews that connected and coordinated an abundance of print. By the end of the eighteenth century, there even emerged a term for all of this pointing: the language of books [Büchersprache]. Books were imagined to speak to one another because they constantly pointed to and cited one another. The possibility of knowledge and interaction with the broader world in the Enlightenment rested not only on the pensive, autonomous philosopher, but also within the links from book to book, essay to essay.

Google’s Citational Logic

The founders of Google, Larry Page and Sergey Brin, modeled their revolutionary search engine on the citational logic of the footnote and thus transposed many of its assumptions about knowledge and technology into a digital medium. Google “organizes the world’s information,” as their motto goes, by modeling the hyperlink structure inherent in the document-based Web; that is, it produces search results based on all of the pointing between digital texts that hyperlinks do. Taking advantage of the enormous scaling power afforded by digitization, Google, however, takes this citational logic to both a conceptual and practical extreme. Whereas the footnotes in Enlightenment texts were always bound to particular pages, Google uses each hyperlink as a data point for its algorithms and creates a digitized map of all possible links among documents.

Page and Brin started from the insight that the web “was loosely based on the premise of citation and annotation—after all, what is a link but a citation, and what was the text describing that link but annotation.” Page himself saw this citational logic as the key to modeling the Web’s own structure. Modern academic citation is simply the practice of pointing to other people’s work—very much like the footnote. As we saw with Enlightenment journals, a citation not only lists important information about another work, but also confers authority on that work: “the process of citing others confers their rank and authority upon you—a key concept that informs the way Google works.”
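Page’s analogy can be made concrete: in HTML, an anchor tag carries both the citation (its href) and the annotation (its anchor text). A short sketch using Python’s standard-library HTML parser; the snippet being parsed is invented for illustration.

```python
# A sketch of "a link is a citation, its text an annotation": extract
# (url, anchor text) pairs from HTML. The input snippet is invented.
from html.parser import HTMLParser

class CitationExtractor(HTMLParser):
    """Collect (href, anchor text) pairs from anchor tags."""

    def __init__(self):
        super().__init__()
        self.citations = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href is not None:
            self.citations.append((self._href, data.strip()))
            self._href = None

extractor = CitationExtractor()
extractor.feed('See <a href="http://example.org/essay">a related essay</a>.')
print(extractor.citations)
# [('http://example.org/essay', 'a related essay')]
```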

With his original Google project, Page wanted to trace all of the links that connected different pages on the Web, not only the outgoing links, but also their backward paths. Page argued that pure computational power could produce a more complete model of the citational structure of the Web—a map of interlinked and interdependent documents by means of tracing hyperlinked citations. He intended to exploit what computer scientists refer to as the Web Graph—the set of all nodes, corresponding to static HTML pages, with directed hyperlinks from page A to page B. In early 1998 there were an estimated 150 million nodes joined by 2 billion links.
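In data-structure terms, the Web Graph is a directed graph, and the “backward paths” Page wanted are recovered by inverting it. A toy sketch with invented page names:

```python
# A toy Web Graph: pages as nodes, hyperlinks as directed edges.
# The pages and links are invented for illustration.
forward_links = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": [],
}

# Inverting the graph recovers the backward paths: for each page,
# the pages that point to it.
backward_links = {}
for source, targets in forward_links.items():
    for target in targets:
        backward_links.setdefault(target, []).append(source)

print(backward_links)
# {'page_b': ['page_a'], 'page_c': ['page_a', 'page_b']}
```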

Other search engines, however, had had this modeling idea before. Given the proliferation of Web pages and with them hyperlinks, Brin and Page, like all other search engineers, knew they had to scale up “to keep up with the growth of the web.” By 1994 the World Wide Web Worm (WWWW) had indexed 110,000 pages, but by 1997 WebCrawler had indexed over 100 million Web documents. As Brin and Page put it in 1998, it was “foreseeable” that by 2000 a comprehensive index would contain over a billion documents. They were not merely intent on indexing pages or modeling all of the links between documents on the Web, however. They were also interested in increasing the “quality of results” that search engines returned. In order for searches to improve, their search engine would focus not just on the comprehensiveness, but on the relevance or quality of its results.

The insight that made Google Google was the recognition that all links and all pages are not equal. In designing their link analysis algorithm, PageRank, Brin and Page recognized that the real power of this citational logic rested not just in counting links from all pages equally, but in “normalizing by the number of links on a page.” The key difference between Google and early digital search technologies (like the WWWW and the early Yahoo) was that it did not simply count or collate citations. Other early search engines were too descriptive, too neutral. Brin and Page reasoned that users wanted help not just in collecting but in evaluating all of those millions of webpages. From its beginnings at Stanford, the PageRank algorithm modeled the normative value of one page over another. It was concerned not simply with questions of completeness or managerial efficiency, but of value. It exploited the often-overlooked fact that hyperlinks, like those Enlightenment footnotes, not only connected document to document, but offered an implicit evaluation. The technology of the hyperlink, like the footnote, is not neutral but laden with normative evaluations.
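Brin and Page’s published formulation makes the normalization explicit: a page’s rank is a damped sum of the ranks of the pages that cite it, each divided by that citing page’s number of outgoing links. The toy implementation below is a sketch of that recursive idea, not Google’s production system; the four-page graph and the damping factor of 0.85 are illustrative assumptions.

```python
# A toy PageRank: each page's rank is fed by the pages that cite it,
# normalized by each citing page's out-degree. Graph and damping factor
# are invented for illustration.
web_graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(graph, damping=0.85, iterations=50):
    pages = list(graph)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {}
        for page in pages:
            # Rank flowing in from every page that links here, divided
            # by how many links that citing page hands out in total.
            incoming = sum(
                rank[source] / len(targets)
                for source, targets in graph.items()
                if page in targets
            )
            new_rank[page] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank  # outputs become the next iteration's inputs
    return rank

print(pagerank(web_graph))
# Page C, cited by three different pages, ends up with the highest rank.
```

Note the last step of the loop: the outputs of one pass become the inputs of the next, the recursive structure the essay returns to below.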

The Algorithmic Self

In conclusion, I would like to forestall a possible concern that in historicizing information overload, I risk eliding the particularity of our own digital world and dismissing valid concerns, like Carr’s, about how we interact with our digital technologies. In highlighting the analogies between Google and Enlightenment print culture, I have attempted to resist the alarmism and utopianism that tend to frame current discussions of our digital culture, first by historicizing these concerns and second by demonstrating that technology needs to be understood in deep, embodied connection with the human. Considered in these terms, the question of whether Google is making us stupid or smart might give way to more complex and productive questions. What, for example, is the idea of the human person underlying Google’s efforts to organize the world’s information and what forms of human life does it facilitate?

In order to address such questions, we need to understand that the Web relies on us as much as we rely on it. Every time we click, type in a search term, or update our Facebook status, the Web changes just a bit. “Google might not be making us stupid but we are making it (and Facebook) smarter” because of all the information that we feed them both every day. The links that make up the Web are evidence of this. They not only point to other pages, but also highlight the contingency of the Web’s structure by highlighting how the Web at any given moment is produced, manipulated, and organized by hundreds of millions of individual users. Links embody the contingency of the Web, its historical and ever-changing structure of which humans are an essential element.

Thinking more in terms of a digital ecology or environment and less in a human vs. technology dichotomy, we can understand the Web, as James Hendler, Tim Berners-Lee, and colleagues recently put it, not just as an isolated machine “to be engineered for improved performance,” but as a “phenomenon with which we interact.” They write, “at the micro-scale, the Web is an infrastructure of artificial languages and protocols; it is a piece of engineering. However, it is the interaction of human beings creating, linking, and consuming information that generates the Web’s behavior as emergent properties at the macro-scale.”

It is at this level of analysis, where the human and its technologies are inextricable and together form something like a digital ecology, that we can, for example, evaluate a recent claim of one of Google’s founders. Discussing the future of the search firm, Page described the “perfect search engine” as that which would “understand exactly what I mean and give me back exactly what I want.” Such an “understanding,” however, is a function of the implicit normativity of the citational logic that Google’s search engine shares with the Enlightenment footnote. These technologies never leave our desires and thoughts unmediated and unmanipulated. But Google’s search engines transform the normativity of the citational logic of the footnote in important and particular ways that have come to distinguish the digital age from the print age. Whereas an Enlightenment reader might have been able to connect four or five footnotes without much effort, Google’s search engine follows hundreds of millions of links in a fraction of a second. The embodied human can all too easily seem to disappear at such scales. If, as I have done above, the relevance of technology has to be argued for in the Enlightenment, then the inverse is the case for our digital age—the relevance of the embodied human agent has to be argued for today.

On the one hand, individual human persons play a rather insignificant role in Google’s operations. When we conduct a search on Google, the process of evaluation is fundamentally different from the form of evaluation tied to the footnote. Because Google’s search engine operates at such massive scales, it evaluates and normalizes links (judges which ones are relevant) through a recursive function. PageRank is an iterative algorithm—all outputs become inputs in an endless loop. The value of something on the Web is determined simply by the history of what millions of users have valued—that is, its inputs are always a function of its outputs. It is a highly scaled-up feedback loop. A Google search can only ever retrieve what is already in a document. It can only ever find what is known to the system of linked documents. The system is defined not by a particular object, operator, or node within the system, but rather by the history of the algorithm’s own operations.

If my son’s Web page on the construction of his tree house has no incoming links, then his page, practically speaking, does not exist according to PageRank’s logic. Google web crawlers will not find it—or if they do, it will have a very low rank—and thus, because we experience the Web through Google, neither will you. The freedom of the Web—the freedom to link and follow links—is a function of the closed and recursive nature of the system, one that includes by necessarily excluding. Most contemporary search engines, Google chief among them, now share the assumption that a “hyperlink” is a marker of authority or endorsement. Serendipity is nearly impossible in such a document-centric Web. Questions of value and authority are functions of and subject to the purported wisdom of the digital crowd that is itself a normalized product of an algorithmic calculation of value and authority.

The normative “I” that Google assumes, the “I” that Page’s perfect search engine would understand, is an algorithmic self. It is a function of a citational logic that has been extended to an algorithmic logic. It is an “I” constructed by a limited and fundamentally contingent Web marked by our own history of searches, our own well-worn paths. What I want at any given moment is forever defined by what I have always wanted or what my demographic others have always wanted.

On the other hand, individual human persons are central agents in Google’s operations because they author hyperlinks. Columnists like Paul Krugman and Peggy Noonan make decisions about what to link to and what not to link to in their columns. Similarly, as we click from link to link (or choose not to click), we too make decisions and judgments about the value of a link and thus of the document that hosts it.


Because algorithms increase the scale of such operations by processing millions of links, however, they obscure this more human element of the Web. All of those decisions to link from one particular page to the next, to click from one link to the next involve not just a link-fed algorithm, but hundreds of millions of human persons interacting with Google every minute. These are the human interactions that have an impact on the Web at the macro-level, and they are concealed by the promises of the Google search box.

Only at this macro-level of analysis can we make sense of the fact that Google’s search algorithms do not operate in absolute mechanical purity, free of outside interference. Only if we understand the Web and our search and filter technologies as elements in a digital ecology can we make sense of the emergent properties of the complex interactions of humans and technology: gaming the Google system through search optimization strategies, the decision by Google employees (not algorithms) to ban certain webpages and privilege others (ever notice the relatively recent dominance of Wikipedia pages in Google searches?). The Web is not just a technology but an ecology of human-technology interaction. It is a dynamic culture with its own norms and practices.

New technologies, be it the printed encyclopedia or Wikipedia, are not abstract machines that independently render us stupid or smart. As we saw with Enlightenment reading technologies, knowledge emerges out of complex processes of selection, distinction, and judgment—out of the irreducible interactions of humans and technology. We should resist the false promise that the empty box below the Google logo has come to represent—either unmediated access to pure knowledge or a life of distraction and shallow information. It is a ruse. Knowledge is hard won; it is crafted, created, and organized by humans and their technologies. Google’s search algorithms are only the most recent in a long history of technologies that humans have developed to organize, evaluate, and engage their world.

Endnotes

1. Ann Blair, “Information Overload, the Early Years,” The Boston Globe (28 November 2010): <http://www.boston.com/bostonglobe/ideas/articles/2010/11/28/information_overload_the_early_years/>.
2. Mark N. Hansen, Embodying Technesis: Technology beyond Writing (Ann Arbor: University of Michigan Press, 2010) 235.
3. Kevin Kelly, “Scan This Book!,” The New York Times (14 May 2006): <http://www.nytimes.com/2006/05/14/magazine/14publishing.html?pagewanted=all>.
4. Randall Stross, “World’s Largest Social Network: The Open Web,” The New York Times (15 May 2010): <http://www.nytimes.com/2010/05/16/business/16digi.html>.
5. The most euphoric among them speak of a coming “singularity” when computer intelligence will exceed human intelligence.
6. For a less utopian and more nuanced account of a post-human era, see Friedrich Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young and Michael Wutz (Palo Alto: Stanford University Press, 1999).
7. Nicholas Carr, “Is Google Making Us Stupid?: What the Internet Is Doing to Our Brains,” The Atlantic (July–August 2008): <http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868/>. See also the expansion of his argument in The Shallows: What the Internet Is Doing to Our Brains (New York: Norton, 2010).
8. Quoted in Ann Blair, Too Much to Know: Managing Scholarly Information before the Modern Age (New Haven: Yale University Press, 2010) 15. The following historical account draws on Blair’s work.
9. Quoted in Stuart Brown, “The Seventeenth-Century Intellectual Background,” The Cambridge Companion to Leibniz, ed. Nicholas Jolley (New York: Cambridge University Press, 1995) 61 n28.
10. Johann Gottfried Herder, Briefe zur Beförderung der Humanität (Berlin and Weimar: Aufbau-Verlag, 1971) II: 92–93.
11. A review of Christian Thomasius’s Observationum selectarum ad rem litterariam spectantium [Select Observations Related to Learning], volume II (Halle, 1702), which was published in the April 1702 edition of the monthly British newspaper History of the Works of the Learned, Or an Impartial Account of Books Lately Printed in all Parts of Europe, as cited in David McKitterick, “Bibliography, Bibliophily and Organization of Knowledge,” The Foundations of Knowledge: Papers Presented at Clark Library (Los Angeles: William Andrews Clark Memorial Library, 1985) 202.
12. Johann Georg Heinzmann, Appell an meine Nation: Über die Pest der deutschen Literatur (Bern: 1795) 125.
13. Immanuel Kant, Philosophical Encyclopedia, 29:30, in Kant’s Gesammelte Schriften, ed. Königliche Preußische (later Deutsche) Akademie der Wissenschaften (Berlin: Walter de Gruyter, 1902–present).
14. See Richard R. Yeo, Encyclopaedic Visions: Scientific Dictionaries and Enlightenment Culture (Cambridge: Cambridge University Press, 2001).
15. Blair, Too Much to Know, 34.
16. Blair, Too Much to Know, 53.
17. Blair, Too Much to Know, 8.
18. Herder quoted in Ernst Behler, “Friedrich Schlegels Enzyklopädie der literarischen Wissenschaften im Unterschied zu Hegels Enzyklopädie der philosophischen Wissenschaften,” Studien zur Romantik und idealistischen Philosophie (Paderborn: Schöningh, 1988) 246.
19. From an interview with Nicholas Carr available at <http://bigthink.com/nicholarcarr>.
20. Kittler xxxix.
21. Elizabeth L. Eisenstein, The Printing Revolution in Early Modern Europe (New York: Cambridge University Press, 2005) 332.
22. Francis Bacon, “On Studies,” Essays with Annotations (Boston: Lee and Shepard, 1884) 482.
23. Much of this work has been done in German-language scholarship. For an English-language overview, see Guglielmo Cavallo and Roger Chartier, eds., A History of Reading in the West (Amherst: University of Massachusetts Press, 1999).
24. See Jonathan Sheehan, The Enlightenment Bible: Translation, Scholarship, Culture (Princeton: Princeton University Press, 2005).
25. Immanuel Kant, Anthropologie, 7:208, in Kant’s Gesammelte Schriften, ed. Königliche Preußische (later Deutsche) Akademie der Wissenschaften (Berlin: Walter de Gruyter, 1902–present).
26. Hansen 271 n8.
27. Anthony Grafton, The Footnote: A Curious History (Cambridge, MA: Harvard University Press, 1997) 32. Footnotes of this sort go back to at least the seventeenth century. John Selden’s History of Tithes (1618) and Johannes Eisenhart’s De fide historica (1679), which emphasized the importance of citing sources, reveal the process of knowledge production.
28. Grafton 32.
29. John Battelle, The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture (New York: Portfolio, 2005) 72.
30. Battelle 70.
31. James Gleick, The Information: A History, a Theory, a Flood (New York: Pantheon, 2011) 423.
32. Sergey Brin and Lawrence Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Networks and ISDN Systems 30 (1998): 107–17.
33. Brin and Page.
34. Siva Vaidhyanathan, The Googlization of Everything (And Why We Should Worry) (Berkeley: University of California Press, 2011) 182.
35. James Hendler, et al., “Web Science: An Interdisciplinary Approach to Understanding the Web,” Communications of the ACM 51.7 (July 2008): 60–69.
36. Larry Page, as quoted at <http://www.google.com/about/corporate/company/tech.html>.
37. Critics of Google’s document-centric search technologies have long been promising the advent of a semantic web that would “free” data from a document-based web. Some see social media tools like Facebook and Twitter as offering something similar. For an early vision of what this might look like, see Tim Berners-Lee, James Hendler, and Ora Lassila, “The Semantic Web,” Scientific American (17 May 2001): 34–43.

Chad Wellmon is Assistant Professor of German Studies at the University of Virginia. He is the author of Becoming Human: Romantic Anthropology and the Embodiment of Freedom (2010) and is currently finishing a book on eighteenth-century information overload and the modern research university.
