
Self-reference
From Wikipedia, the free encyclopedia


Contents

1 Autogram
  1.1 Self-enumerating pangrams
  1.2 Generalizations
  1.3 Reflexicons
  1.4 References
  1.5 External links

2 Autological word
  2.1 Overview
  2.2 Paradox
  2.3 References
  2.4 Further reading
  2.5 External links

3 Circular reference
  3.1 In language
  3.2 In computer programming
  3.3 Circular references in spreadsheets
  3.4 See also
  3.5 References

4 Corecursion
  4.1 Examples
    4.1.1 Factorial
    4.1.2 Fibonacci sequence
    4.1.3 Tree traversal
  4.2 Definition
  4.3 Discussion
  4.4 History
  4.5 See also
  4.6 Notes
  4.7 References

5 Fumblerules
  5.1 Examples
  5.2 See also
  5.3 References
  5.4 External links

6 Hofstadter’s law
  6.1 See also
  6.2 References

7 I (pronoun)
  7.1 Etymology
  7.2 Capitalization
  7.3 Me as a subject pronoun
  7.4 Older versions
  7.5 See also
  7.6 References
  7.7 Further reading
  7.8 External links

8 Impredicativity
  8.1 History
  8.2 See also
  8.3 Notes
  8.4 References

9 Indirect self-reference
  9.1 See also

10 Liar paradox in early Islamic tradition
  10.1 Athīr and the Liar paradox
  10.2 Nasir al-Din al-Tusi on the Liar paradox
  10.3 References
  10.4 Bibliography

11 Non-well-founded set theory
  11.1 Details
  11.2 Applications
  11.3 See also
  11.4 Notes
  11.5 References
  11.6 Further reading
  11.7 External links

12 Recursion
  12.1 Formal definitions
  12.2 Informal definition
  12.3 In language
    12.3.1 Recursive humor
  12.4 In mathematics
    12.4.1 Recursively defined sets
    12.4.2 Finite subdivision rules
    12.4.3 Functional recursion
    12.4.4 Proofs involving recursive definitions
    12.4.5 Recursive optimization
  12.5 In computer science
  12.6 In art
  12.7 The recursion theorem
    12.7.1 Proof of uniqueness
    12.7.2 Examples
  12.8 See also
  12.9 Bibliography
  12.10 References
  12.11 External links

13 Recursive acronym
  13.1 Computer-related examples
    13.1.1 Notable examples
  13.2 Organizations
  13.3 See also
  13.4 References
  13.5 External links

14 Self-reference
  14.1 Usage
  14.2 Examples
    14.2.1 In language
    14.2.2 In mathematics
    14.2.3 In literature, film, and popular culture
  14.3 See also
  14.4 References
  14.5 Sources

15 Self-referential encoding
  15.1 Self-concept and self-schema
    15.1.1 Self-awareness and personality
  15.2 Theoretical background
  15.3 Types of self-referential encoding tasks
  15.4 Explanations for the self-reference effect
    15.4.1 Elaboration
    15.4.2 Organization
    15.4.3 Dual process
  15.5 Social brain science
    15.5.1 Depth of processing or cognitive structure
    15.5.2 Simulation theory
  15.6 Expansion of the SRE: group reference
  15.7 Applications
    15.7.1 Autism spectrum disorder
    15.7.2 Depression
  15.8 References

16 Self-referential humor
  16.1 Examples
  16.2 See also
  16.3 References

17 Tupper’s self-referential formula
  17.1 See also
  17.2 References
  17.3 External links

18 Universal set
  18.1 Reasons for nonexistence
    18.1.1 Russell’s paradox
    18.1.2 Cantor’s theorem
  18.2 Theories of universality
    18.2.1 Restricted comprehension
    18.2.2 Universal objects that are not sets
  18.3 Notes
  18.4 References
  18.5 External links
  18.6 Text and image sources, contributors, and licenses
    18.6.1 Text
    18.6.2 Images
    18.6.3 Content license


Chapter 1

Autogram

An autogram (Greek: αὐτός = self, γράμμα = letter) is a sentence that describes itself in the sense of providing an inventory of its own characters. They were invented by Lee Sallows, who also coined the word ‘autogram’.[1] An essential feature is the use of full cardinal number names such as “one”, “two”, etc., in recording character counts. Autograms are also called ‘self-enumerating’ or ‘self-documenting’ sentences. Often, letter counts only are recorded while punctuation signs are ignored, as in this example:

This sentence employs two a’s, two c’s, two d’s, twenty-eight e’s, five f’s, three g’s, eight h’s, eleven i’s, three l’s, two m’s, thirteen n’s, nine o’s, two p’s, five r’s, twenty-five s’s, twenty-three t’s, six v’s, ten w’s, two x’s, five y’s, and one z.

The first autogram to be published was composed by Lee Sallows in 1982 and appeared in Douglas Hofstadter's Metamagical Themas column in Scientific American.[2]

Only the fool would take trouble to verify that his sentence was composed of ten a’s, three b’s, four c’s, four d’s, forty-six e’s, sixteen f’s, four g’s, thirteen h’s, fifteen i’s, two k’s, nine l’s, four m’s, twenty-five n’s, twenty-four o’s, five p’s, sixteen r’s, forty-one s’s, thirty-seven t’s, ten u’s, eight v’s, eight w’s, four x’s, eleven y’s, twenty-seven commas, twenty-three apostrophes, seven hyphens and, last but not least, a single !

The task of producing an autogram is perplexing because the object to be described cannot be known until its description is first complete.[3][4]
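Checking a finished autogram, on the other hand, is mechanical: strip the punctuation, tally the letters, and compare the tally with the counts the sentence claims for itself. The Python sketch below, added here purely for illustration, does this for the first example above; the number-word table and the parsing regex are simplifying assumptions of the sketch, not part of the article.

import re
from collections import Counter

# Number words used by the example sentence (an assumption for this sketch).
NUMBERS = {"one": 1, "two": 2, "three": 3, "five": 5, "six": 6, "eight": 8,
           "nine": 9, "ten": 10, "eleven": 11, "thirteen": 13,
           "twenty-three": 23, "twenty-five": 25, "twenty-eight": 28}

def claimed_counts(sentence):
    # Read off the counts the sentence claims, e.g. "twenty-eight e's" -> {'e': 28}.
    claims = {}
    for number, letter in re.findall(r"([a-z-]+) ([a-z])\b", sentence.lower()):
        if number in NUMBERS:
            claims[letter] = NUMBERS[number]
    return claims

def actual_counts(sentence):
    # Count the letters that actually occur, ignoring punctuation and spaces.
    return Counter(c for c in sentence.lower() if c.isalpha())

sentence = ("This sentence employs two a's, two c's, two d's, twenty-eight e's, "
            "five f's, three g's, eight h's, eleven i's, three l's, two m's, "
            "thirteen n's, nine o's, two p's, five r's, twenty-five s's, "
            "twenty-three t's, six v's, ten w's, two x's, five y's, and one z.")

claims = claimed_counts(sentence)
actual = actual_counts(sentence)
print(all(actual[letter] == count for letter, count in claims.items())
      and set(actual) == set(claims))  # expect True for a genuine autogram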

1.1 Self-enumerating pangrams

A type of autogram that has attracted special interest is the autogramic pangram, a self-enumerating sentence in which every letter of the alphabet occurs at least once.[5] Certain letters do not appear in either of the two autograms above, which are therefore not pangrams. The first ever self-enumerating pangram appeared in a Dutch newspaper and was composed by Rudy Kousbroek.[6][7][8] Sallows, who lives in the Netherlands, was challenged by Kousbroek to produce a self-enumerating ‘translation’ of this pangram into English—an impossible-seeming task. This prompted Sallows to construct an electronic Pangram Machine.[1] Eventually the machine succeeded, producing the example below, which was published in Scientific American in October 1984:[9]

This pangram contains four as, one b, two cs, one d, thirty es, six fs, five gs, seven hs, eleven is, one j, one k, two ls, two ms, eighteen ns, fifteen os, two ps, one q, five rs, twenty-seven ss, eighteen ts, two us, seven vs, eight ws, two xs, three ys, & one z.

1.2 Generalizations

Autograms exist that exhibit extra self-descriptive features. Besides counting each letter, here the total number of letters appearing is also named:[10][11]


This sentence contains one hundred and ninety-seven letters: four a’s, one b, three c’s, five d’s, thirty-four e’s, seven f’s, one g, six h’s, twelve i’s, three l’s, twenty-six n’s, ten o’s, ten r’s, twenty-nine s’s, nineteen t’s, six u’s, seven v’s, four w’s, four x’s, five y’s, and one z.

Just as an autogram is a sentence that describes itself, so there exist closed chains of sentences each of which describes its predecessor in the chain. Viewed thus, an autogram is such a chain of length 1. Here follows a chain of length 2:[10][11]

1.3 Reflexicons

A special kind of autogram is the ‘reflexicon’ (short for “reflexive lexicon”), which is a self-descriptive word list that describes its own letter frequencies. The constraints on reflexicons are much tighter than on autograms because the freedom to choose alternative words such as “contains”, “comprises”, “employs”, and so on, is lost. However, a degree of freedom still exists through appending entries to the list that are strictly superfluous.

For example, “Sixteen e’s, six f’s, one g, three h’s, nine i’s, nine n’s, five o’s, five r’s, sixteen s’s, five t’s, three u’s, four v’s, one w, four x’s” is a reflexicon, but it includes what Sallows calls “dummy text” in the shape of “one g” and “one w”. The latter might equally be replaced with “one #”, where “#” can be any typographical sign not already listed. According to Sallows, there exist only two pure (i.e., no dummy text) English reflexicons:[11]

1.4 References

[1] Sallows, L., In Quest of a Pangram, Abacus, Vol 2, No 3, Spring 1985, pp 22–40

[2] Hofstadter, D.R. “Metamagical Themas” Scientific American, January 1982, pp 12–17

[3] Hofstadter, D.R., Metamagical Themas: Questing for the Essence of Mind and Pattern, 1996, pp. 390–2, Basic Books, ISBN 978-0-465-04566-2

[4] Letaw J.R. Pangrams: A Nondeterministic Approach, Abacus, Vol 2, No 3, Spring 1985, pp 42–7

[5] Encyclopedia of Science: self-enumerating sentence

[6] Kousbroek, R., “Welke Vraag Heeft Vierendertig Letters?” NRC Handelsblad, Cultureel Supplement 640, 11 Feb. 1983, p. 3.

[7] Kousbroek, R. “Instructies Voor Het Demonteren Van Een Bom,” NRC Handelsblad, Cultureel Supplement 644, 11 March 1983, p. 9.

[8] Kousbroek, R. “De Logologische Ruimte” Amsterdam: Meulenhoff, 1984, pp 135–53.

[9] Dewdney, A.K. “Computer Recreations” Scientific American, October 1984, pp 18–22

[10] Self-enumerating pangrams: A logological history by Eric Wassenaar, April 17, 1999

[11] Sallows, L., Reflexicons, Word Ways, August 1992, 25; 3: 131–41

1.5 External links

• Autograms in various languages


Chapter 2

Autological word

An autological word (also called homological word or autonym) is a word that expresses a property that it also possesses (e.g. the word “short” is short, “noun” is a noun, “English” is English, “pentasyllabic” has five syllables, “word” is a word, “sesquipedalian” is a long word; see Wiktionary for a partial list). The opposite is a heterological word, one that does not apply to itself (e.g. “long” is not long, “verb” is not typically a verb, “monosyllabic” has five syllables, “German” is not German, etc.).

2.1 Overview

Unlike more general concepts of autology and self-reference, this particular distinction and opposition of “autological” and “heterological words” is uncommon in linguistics for describing linguistic phenomena or classes of words, but is current in logic and philosophy, where it was introduced by Kurt Grelling and Leonard Nelson for describing a semantic paradox, later known as Grelling’s paradox or the Grelling–Nelson paradox.[1]

As the paradox later became known outside academic circles, it attracted wider popular interest, which in more recent times has expressed itself in the creation of lists of autological words.[2]

One source of autological words is archetypal words (ostensive definition) – words chosen to describe a phenomenon by using an example of the phenomenon, which are thus necessarily autological. One such example is mondegreen – a mishearing of a phrase – which is itself based on a mishearing of “And laid him on the green” as “And Lady Mondegreen”.

A word’s status as autological may change over time. For example, neologism was once an autological word but no longer is; similarly, protologism (a word invented recently by literary theorist Mikhail Epstein) may or may not lose its autological status depending on whether or not it gains wider usage.

2.2 Paradox

Main article: Grelling–Nelson paradox

The word autological itself may or may not be an autological word. It demonstrates an infinite regress: any word is autological if its appearance expresses its own meaning, so autological is autological if autological expresses the property of a word expressing its own meaning. Whether the word ‘heterological’ is itself heterological is an even more problematic paradox.

2.3 References

[1] Grelling and Nelson used the following definition when first publishing their paradox in 1908: “Let φ(M) be the word that denotes the concept defining M. This word is either an element of M or not. In the first case we will call it 'autological', in the second 'heterological'.” (Peckhaus 1995, p. 269). An earlier version of Grelling’s paradox had been presented by Nelson in a letter to Gerhard Hessenberg on 28 May 1907, where “heterological” is not yet used and “autological words” are defined as “words that fall under the concepts denoted by them” (Peckhaus 1995, p. 277).

[2] Henry Segerman: Autological words; Wiktionary: English autological terms

2.4 Further reading

• Volker Peckhaus: The Genesis of Grelling’s Paradox, in: Ingolf Max / Werner Stelzner (eds.), Logik und Mathematik: Frege-Kolloquium Jena 1993, Walter de Gruyter, Berlin 1995 (Perspektiven der analytischen Philosophie, 5), pp. 269–280

• Simon Blackburn: The Oxford Dictionary of Philosophy, Oxford University Press, 2nd ed. Oxford 2005, p. 30(“autological”), p. 170 (“heterological”), p. 156 (“Grelling’s paradox”)

2.5 External links

• A list of autological words from Henry Segerman

• A brief look into the different types of autology by Ionatan Waisgluss


Chapter 3

Circular reference


A circular reference is a series of references where the last object references the first, resulting in a closed loop.

Figure: a circular reference (shown in red)


3.1 In language

A circular reference is not to be confused with the logical fallacy of a circular argument. Although a circular reference will often be unhelpful and reveal no information, such as two entries in a book index referring to each other, it is not necessarily useless. Dictionaries, for instance, must always ultimately be circular, since all words in a dictionary are defined in terms of other words, yet a dictionary nevertheless remains a useful reference. Sentences containing circular referents can still be meaningful:

Her brother gave her a kitten; his sister thanked him for it.

is circular but not without meaning. Indeed, it can be argued that self-reference is a necessary consequence of Aristotle’s Law of non-contradiction, a fundamental philosophical axiom. In this view, without self-reference, logic and mathematics become impossible, or at least, lack usefulness.[1][2]

3.2 In computer programming

For circular references between objects or resources, see Reference counting.

Circular references can appear in computer programming when one piece of code requires the result from another,but that code needs the result from the first. For example:

Function A will show the time the sun last set based on the current date, which it can obtain by calling Function B. Function B will calculate the date based on the number of times the moon has orbited the earth since the last time Function B was called. So, Function B asks Function C just how many times that is. Function C doesn't know, but can figure it out by calling Function A to get the time the sun last set.
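The dependency loop can be written out directly. In this hypothetical Python sketch (added for illustration; the + 1 merely stands in for whatever each function would really compute from the other's result), any attempt to evaluate one of the functions recurses until the interpreter gives up:

def function_a():
    return function_b() + 1   # needs Function B's result first

def function_b():
    return function_c() + 1   # needs Function C's result first

def function_c():
    return function_a() + 1   # needs Function A's result, closing the loop

try:
    function_a()
except RecursionError:
    print("No function in the cycle can ever produce a value.")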

The entire set of functions is now worthless because none of them can return any useful information whatsoever. This leads to what is technically known as a livelock.

Circular references also appear in spreadsheets when two cells require each other's result. For example, if the value in Cell A1 is to be obtained by adding 5 to the value in Cell B1, and the value in Cell B1 is to be obtained by adding 3 to the value in Cell A1, no values can be computed. (Even if the specifications are A1:=B1+5 and B1:=A1-5, there is still a circular reference. It does not help that, for instance, A1=3 and B1=−2 would satisfy both formulae, as there are infinitely many other possible values of A1 and B1 that can satisfy both instances.)

A circular reference represents a big problem in computing. A deadlock occurs when two or more processes are each waiting for another to release a resource. Most relational databases such as Oracle and SQL Server do not allow circular referencing, because there is always a problem when deleting a row from a table that has dependencies on a row in another table (foreign key) which refers to the row being deleted. From the technical documentation at Microsoft: “The FOREIGN KEY constraints cannot be used to create a self-referencing or circular FOREIGN KEY constraint.”[3]

For Oracle and PostgreSQL, the problem with updating a circular reference can be solved by defining the corresponding foreign keys as deferrable (see CREATE TABLE for PostgreSQL and DEFERRABLE Constraint Examples for Oracle). In that case the constraint is checked at the end of the transaction, not at the time the DML statement is executed. To update a circular reference, two statements can be issued in a single transaction that will satisfy both references once the transaction is committed.

Only inner joins are supported and are specified by a comparison of columns from different tables; circular joins are not supported. A circular join is a SQL query that links three or more tables together into a circuit.[4] Oracle uses the term “cyclic” to designate a circular reference.[5]

A distinction should be made between processes containing a circular reference that are incomputable and those that are an iterative calculation with a final output. The latter may fail in spreadsheets not equipped to handle them but are nevertheless still logically valid.[2]


3.3 Circular references in spreadsheets

Circular references in worksheets can be a very useful technique for solving implicit equations such as the Colebrook equation and many others, which might otherwise require tedious Newton–Raphson algorithms in VBA or the use of macros.[6]
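In iterative-calculation mode, a spreadsheet repeatedly re-evaluates the circular cell until successive values agree to within a tolerance; in other words, it performs fixed-point iteration. The following Python sketch, added here only as an illustration of that idea, applies the same iteration to the Colebrook equation; the initial guess, tolerance, and parameter values are assumptions of the sketch, not from the cited article.

import math

def colebrook_friction_factor(reynolds, relative_roughness, tol=1e-10, max_iter=100):
    # Solve the implicit Colebrook equation
    #   1/sqrt(f) = -2 log10( k/3.7 + 2.51 / (Re * sqrt(f)) )
    # by fixed-point iteration, mimicking a spreadsheet cell that refers to its
    # own previous value until the change falls below a tolerance.
    f = 0.02  # initial guess for the Darcy friction factor (an assumption)
    for _ in range(max_iter):
        rhs = -2.0 * math.log10(relative_roughness / 3.7
                                + 2.51 / (reynolds * math.sqrt(f)))
        f_new = 1.0 / rhs ** 2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

print(colebrook_friction_factor(reynolds=1e5, relative_roughness=1e-4))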

3.4 See also

• MS Fnd in a Lbry

• Nested function

• Halting problem

• Catch-22 (logic)

• There’s a hole in the bucket

• Regress argument

• Pseudohistory

• Chicken or the egg

• Circular reporting

3.5 References

[1] Terry A. Osborn, The future of foreign language education in the United States, pp. 31–33, Greenwood Publishing Group, 2002, ISBN 0-89789-719-6.

[2] Robert Fiengo, Robert May, Indices and identity, pp. 59–62, MIT Press, 1994, ISBN 0-262-56076-3.

[3] “Microsoft TechNet - SQL Server CREATE TABLE”.

[4] “Microsoft Developer Network - SQL Syntax”.

[5] “Oracle Documentation - SQL Error Messages”.

[6] “Solve Implicit Equations Inside Your Worksheet” by Anilkumar M, Dr Sreenivasan E and Dr Raghunathan K.


Chapter 4

Corecursion

In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available, and needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite “direction” inherent in corecursion and recursion.

Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as they can be produced from simple data (base cases). Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.

Corecursion can produce both finite and infinite data structures as result, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to only produce a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.

4.1 Examples

Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples local variables are used, and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.

4.1.1 Factorial

A classic example of recursion is computing the factorial, which is defined recursively by 0! := 1 and n! := n × (n − 1)!.

To recursively compute its result on a given input, a recursive function calls (a copy of) itself with a different (“smaller” in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the base case has been reached. Thus a call stack develops in the process. For example, to compute fac(3), this recursively calls in turn fac(2), fac(1), fac(0) (“winding up” the stack), at which point recursion terminates with fac(0) = 1, and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial call frame fac(3), where the final result is calculated as 3 × 2 =: 6 and finally returned. In this example a function returns a single value.

This stack unwinding can be explicated, defining the factorial corecursively, as an iterator, where one starts with the case of 1 =: 0!, then from this starting value constructs factorial values for increasing numbers 1, 2, 3, ... as in the above recursive definition with the “time arrow” reversed, as it were, by reading it backwards as n! × (n+1) =: (n+1)!. The corecursive algorithm thus defined produces a stream of all factorials. This may be concretely implemented as a generator. Symbolically, noting that computing the next factorial value requires keeping track of both n and f (a previous factorial value), this can be represented as:

n, f = (0, 1) : (n + 1, f × (n + 1))

or in Haskell,

(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)

meaning, “starting from n, f = 0, 1, on each step the next values are calculated as n + 1, f × (n + 1)”. This is mathematically equivalent and almost identical to the recursive definition, but the +1 emphasizes that the factorial values are being built up, going forwards from the starting case, rather than being computed after first going backwards, down to the base case, with a −1 decrement. Note also that the direct output of the corecursive function does not simply contain the factorial n! values, but also includes for each value the auxiliary data of its index n in the sequence, so that any one specific result can be selected among them all, as and when needed.

Note the connection with denotational semantics, where the denotations of recursive programs are built up corecursively in this way.

In Python, a recursive factorial function can be defined as:[lower-alpha 1]

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

This could then be called for example as factorial(5) to compute 5!.

A corresponding corecursive generator can be defined as:

def factorials():
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)

This generates an infinite stream of factorials in order; a finite portion of it can be produced by:

def n_factorials(k):
    n, f = 0, 1
    while n <= k:
        yield f
        n, f = n + 1, f * (n + 1)

This could then be called to produce the factorials up to 5! via:

for f in n_factorials(5):
    print(f)

If we're only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function:

def nth_factorial(k):
    n, f = 0, 1
    while n < k:
        n, f = n + 1, f * (n + 1)
    yield f

As can be readily seen here, this is practically equivalent (just by substituting return for the only yield there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable.
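For comparison, here is a sketch of that accumulator-argument form, added for illustration (the function name and default arguments are assumptions of this sketch). The pair (n, f) that the loop threads through its iterations is passed along as arguments instead; note that CPython does not eliminate tail calls, so this version is purely expository:

def nth_factorial_acc(k, n=0, f=1):
    # n and f are accumulators playing the role of the loop variables above.
    if n == k:
        return f
    return nth_factorial_acc(k, n + 1, f * (n + 1))

print(nth_factorial_acc(5))  # 120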

4.1.2 Fibonacci sequence

In the same way, the Fibonacci sequence can be represented as:

a, b = (0, 1) : (b, a + b)


Note that because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the (b, −) corresponding to a shift forward by one step, and the (−, a + b) corresponding to computing the next term. This can then be implemented as follows (using parallel assignment):

def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

In Haskell,

map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )

4.1.3 Tree traversal

Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion.

Without using recursion or corecursion, one may traverse a tree by starting at the root node, placing the child nodes in a data structure, then removing the nodes in the data structure in turn and iterating over its children.[lower-alpha 2] If the data structure is a stack (LIFO), this yields depth-first traversal, while if the data structure is a queue (FIFO), this yields breadth-first traversal.

Using recursion, a (post-order)[lower-alpha 3] depth-first traversal can be implemented by starting at the root node and recursively traversing each child subtree in turn (the subtree based at each child node) – the second child subtree does not start processing until the first child subtree is finished. Once a leaf node is reached or the children of a branch node have been exhausted, the node itself is visited (e.g., the value of the node itself is outputted). In this case, the call stack (of the recursive functions) acts as the stack that is iterated over.

Using corecursion, a breadth-first traversal can be implemented by starting at the root node, outputting its value,[lower-alpha 4] then breadth-first traversing the subtrees – i.e., passing on the whole list of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the value of all of their root nodes, then passing on their child subtrees, etc.[lower-alpha 5] In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example (above), where the auxiliary information of the index (which step one was at, n) was pushed forward, in addition to the actual output of n!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically:

v, t = ([], FullTree) : (RootValues, ChildTrees)

meaning that at each step, one outputs the list of values of root nodes, then proceeds to the child subtrees. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list).

These can be compared as follows. The recursive traversal handles a leaf node (at the bottom) as the base case (when there are no children, just output the value), and analyzes a tree into subtrees, traversing each in turn, eventually resulting in just leaf nodes – actual leaf nodes, and branch nodes whose children have already been dealt with (cut off below). By contrast, the corecursive traversal handles a root node (at the top) as the base case (given a node, first output the value), treats a tree as being synthesized of a root node and its children, then produces as auxiliary output a list of subtrees at each step, which are then the input for the next step – the child nodes of the original root are the root nodes at the next step, as their parents have already been dealt with (cut off above). Note also that in the recursive traversal there is a distinction between leaf nodes and branch nodes, while in the corecursive traversal there is no distinction, as each node is treated as the root node of the subtree it defines.

Notably, given an infinite tree,[lower-alpha 6] the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures.

In Python, this can be implemented as follows.[lower-alpha 7] The usual post-order depth-first traversal can be defined as:[lower-alpha 8]

def df(node):
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)

This can then be called by df(t) to print the values of the nodes of the tree in post-order depth-first order.

The breadth-first corecursive generator can be defined as:[lower-alpha 9]


def bf(tree):
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list

This can then be called to print the values of the nodes of the tree in breadth-first order:

for i in bf(t):
    print(i)

4.2 Definition

Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.

If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not.[1][2] On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.[3]

Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.[4]

The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell.[5] Also note that the examples used in this article predate the attempts to define corecursion and explain what it is.

4.3 Discussion

The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that were called up before, somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. “fields”), we ascend on the result by filling in its “destructors” (or “observers”, that will be called afterwards, somewhere – so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.

Here is an example in Haskell. The following definition produces the list of Fibonacci numbers in linear time:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming.

This can be done in Python as well:[6]

from itertools import tee, chain, islice, imap

def add(x, y):
    return x + y

def fibonacci():
    def deferred_output():
        for i in output:
            yield i
    result, c1, c2 = tee(deferred_output(), 3)
    paired = imap(add, c1, islice(c2, 1, None))
    output = chain([0, 1], paired)
    return result

for i in islice(fibonacci(), 20):
    print(i)

The definition of zipWith can be inlined, leading to this:

fibs = 0 : 1 : next fibs
  where next (a : t@(b:_)) = (a+b) : next t

This example employs a self-referential data structure. Ordinary recursion makes use of self-referential functions, but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:


fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)

This employs only a self-referential function to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.

Corecursion need not produce an infinite object; a corecursive queue[7] is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in linear time:

data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = queue
  where
    queue = tree : gen 1 queue
    gen 0   p                  = []
    gen len (Leaf   _     : p) = gen (len-1) p
    gen len (Branch _ l r : p) = l : r : gen (len+1) p

This definition takes an initial tree and produces a list of subtrees. This list serves dual purpose as both the queue andthe result (gen len p produces its output len notches after its input back-pointer, p, along the queue). It is finite if andonly if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; thiscan safely be elided if this definition is applied only to infinite trees.Another particularly good example gives a solution to the problem of breadth-first labeling.[8] The function label visitsevery node in a binary tree in a breadth first fashion, and replaces each label with an integer, each subsequent integeris bigger than the last by one. This solution employs a self-referential data structure, and the binary tree can be finiteor infinite.label :: Tree a b -> Tree Int Int label t = t′ where (t′,ns) = label′ t (1:ns) label′ :: Tree a b -> [Int] -> (Tree Int Int,[Int]) label′ (Leaf _ ) (n:ns) = (Leaf n , n+1 : ns ) label′ (Branch _ l r) (n:ns) = (Branch n l′ r′ , n+1 : ns′′) where (l′,ns′) = label′ l ns (r′,ns′′) = label′ r ns′

An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion.

The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.
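To make the unfold connection concrete, here is a small Python sketch added for illustration; the unfold helper and its (value, new_seed) protocol are assumptions of this sketch rather than a standard library facility. It re-expresses the corecursive factorial stream of section 4.1.1 as an anamorphism:

from itertools import islice

def unfold(step, seed):
    # Anamorphism sketch: repeatedly apply step to a seed, corecursively
    # yielding values; step returns (value, new_seed), or None to stop.
    while True:
        result = step(seed)
        if result is None:
            return
        value, seed = result
        yield value

# The factorial stream: the seed (n, f) is exactly the state used in section 4.1.1.
factorials = unfold(lambda s: (s[1], (s[0] + 1, s[1] * (s[0] + 1))), (0, 1))
print(list(islice(factorials, 6)))  # [1, 1, 2, 6, 24, 120]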

4.4 History

Corecursion, referred to as circular programming, dates at least to (Bird 1984), who credits John Hughes and Philip Wadler; more general forms were developed in (Allison 1989). The original motivations included producing more efficient algorithms (allowing one pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.

4.5 See also

• Bisimulation

• Coinduction

• Recursion

• Anamorphism

4.6 Notes

[1] Not validating input data.

[2] More elegantly, one can start by placing the root node itself in the structure and then iterating.

[3] Post-order is to make “leaf node is base case” explicit for exposition, but the same analysis works for pre-order or in-order.

[4] Breadth-first traversal, unlike depth-first, is unambiguous, and visits a node value before processing children.


[5] Technically, one may define a breadth-first traversal on an ordered, disconnected set of trees – first the root node of each tree, then the children of each tree in turn, then the grandchildren in turn, etc.

[6] Assume fixed branching factor (e.g., binary), or at least bounded, and balanced (infinite in every direction).

[7] First defining a tree class, say via:

class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __str__(self):
        return str(self.value)

and initializing a tree, say via:

t = Tree(1, Tree(2, Tree(4), Tree(5)), Tree(3, Tree(6), Tree(7)))

In this example nodes are labeled in breadth-first order: 1 2 3 4 5 6 7

[8] Intuitively, the function iterates over subtrees (possibly empty), then once these are finished, all that is left is the node itself, whose value is then returned; this corresponds to treating a leaf node as basic.

[9] Here the argument (and loop variable) is considered as a whole, possibly infinite tree, represented by (identified with) its root node (tree = root node), rather than as a potential leaf node, hence the choice of variable name.

4.7 References

[1] Barwise and Moss 1996.

[2] Moss and Danner 1997.

[3] Smyth and Plotkin 1982.

[4] Gibbons and Hutton 2005.

[5] Doets and van Eijck 2004.

[6] Hettinger 2009.

[7] Allison 1989; Smith 2009.

[8] Jones and Gibbons 1992.

• Bird, Richard Simpson (1984). “Using circular programs to eliminate multiple traversals of data”. Acta Informatica 21 (3): 239–250. doi:10.1007/BF00264249.

• Lloyd Allison (April 1989). “Circular Programs and Self-Referential Structures”. Software Practice and Experience 19 (2): 99–109. doi:10.1002/spe.4380190202.

• Geraint Jones and Jeremy Gibbons (1992). Linear-time breadth-first tree algorithms: An exercise in the arithmetic of folds and zips (Technical report). Dept of Computer Science, University of Auckland.

• Jon Barwise and Lawrence S Moss (June 1996). Vicious Circles. Center for the Study of Language and Information. ISBN 978-1-57586-009-1.

• Lawrence S Moss and Norman Danner (1997). “On the Foundations of Corecursion”. Logic Journal of the IGPL 5 (2): 231–257. doi:10.1093/jigpal/5.2.231.

• Kees Doets and Jan van Eijck (May 2004). The Haskell Road to Logic, Maths, and Programming. King’s College Publications. ISBN 978-0-9543006-9-2.

• David Turner (2004-07-28). “Total Functional Programming”. Journal of Universal Computer Science 10 (7): 751–768. doi:10.3217/jucs-010-07-0751.

• Jeremy Gibbons and Graham Hutton (April 2005). “Proof methods for corecursive programs”. Fundamenta Informaticae Special Issue on Program Transformation 66 (4): 353–366.

• Leon P Smith (2009-07-29), “Lloyd Allison’s Corecursive Queues: Why Continuations Matter”, The Monad Reader (14): 37–68

• Raymond Hettinger (2009-11-19). “Recipe 576961: Technique for cyclical iteration”.

• M. B. Smyth and G. D. Plotkin (1982). “The Category-Theoretic Solution of Recursive Domain Equations”. SIAM Journal on Computing 11 (4): 761–783. doi:10.1137/0211062.


Chapter 5

Fumblerules

A fumblerule is a rule of language or linguistic style, humorously written in such a way that it breaks this rule.[1] Fumblerules are a form of self-reference.

The science editor George L. Trigg published a list of such rules in 1979.[2] The term “Fumblerules” was coined in a list of such rules compiled by William Safire on Sunday, 4 November 1979,[3][4] in his column “On Language” in the New York Times. Safire later authored a book titled A Lighthearted Guide to Grammar and Good Usage, which was reprinted in 2005 as How Not To Write: The Essential Misrules of Grammar.

5.1 Examples

• “Never use no double negatives.”

• “Eschew obfuscation.”

• “Prepositions are not words to end a sentence with.”

• “Avoid clichés like the plague.”

• “The passive voice should never be employed.”

• “You should not use a big word when a diminutive one would suffice.”

• “It is bad to carelessly split infinitives.”

• “No sentence fragments.”

• "Parentheses are (almost always) unnecessary.”

5.2 See also

• Muphry’s law

5.3 References

[1] Dennis Joseph Enright (1983). A Mania for Sentences. Chatto & Windus/Hogarth Press.

[2] Physical Review Letters 42 (12), pp. 747–748 (19 March 1979)

[3] alt.usage.english.org’s Humorous Rules for Writing

[4] Safire, William (1979-11-04). “On Language; The Fumblerules of Grammar”. New York Times. p. SM4.


5.4 External links

• The Fumblerules of Grammar by William Safire

• Fumblerules entry on the alt.usage.english FAQ


Chapter 6

Hofstadter’s law

Hofstadter’s law is a self-referential time-related adage, coined by Douglas Hofstadter and named after him.

Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.

— Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid [1]

Hofstadter’s law was a part of Douglas Hofstadter’s 1979 book Gödel, Escher, Bach: An Eternal Golden Braid. The law is a statement regarding the difficulty of accurately estimating the time it will take to complete tasks of substantial complexity.[2] It is often cited amongst programmers, especially in discussions of techniques to improve productivity, such as The Mythical Man-Month or extreme programming.[3] The recursive nature of the law is a reflection of the widely experienced difficulty of estimating complex tasks despite all best efforts, including knowing that the task is complex.

The law was initially introduced in connection with a discussion of chess-playing computers, where top-level players were continually beating machines, even though the machines outweighed the players in recursive analysis. The intuition was that the players were able to focus on particular positions instead of following every possible line of play to its conclusion. Hofstadter wrote: “In the early days of computer chess, people used to estimate that it would be ten years until a computer (or program) was world champion. But after ten years had passed, it seemed that the day a computer would become world champion was still more than ten years away”.[4][5] He then suggests that this was “just one more piece of evidence for the rather recursive Hofstadter’s Law”.[6]

6.1 See also

• List of eponymous laws

• Ninety-ninety rule

• Optimism bias

• Parkinson’s law

• Planning fallacy

• Reference class forecasting

• Student syndrome

• Lindy Effect

6.2 References

[1] Gödel, Escher, Bach: An Eternal Golden Braid. 20th anniversary ed., 1999, p. 152. ISBN 0-465-02656-7.


[2] Waters, Donald J.; Commission on Preservation and Access (1992). Electronic technologies and preservation. Commission on Preservation and Access. Retrieved June 8, 2011.

[3] David M. Goldschmidt (October 3, 1983). “The trials and tribulations of a cottage industrialist”. InfoWorld (InfoWorld Media Group, Inc.) 5 (40): 16. Retrieved June 8, 2011.

[4] Gödel, Escher, Bach: An Eternal Golden Braid. 20th anniversary ed., 1999, p. 152. ISBN 0-465-02656-7

[5] Rawson, Hugh (2002). Unwritten Laws: The Unofficial Rules of Life as Handed Down by Murphy and Other Sages. Book Sales. p. 115. Retrieved June 8, 2011.

[6] “Hofstadter’s Law | Unwritten Laws of Life”. Archived from the original on August 26, 2011. Retrieved August 9, 2014.


Chapter 7

I (pronoun)

This article is about the English personal pronoun. For other uses, see I (disambiguation).

The pronoun I /aɪ/ is the first-person singular nominative case personal pronoun in Modern English. It is used to refer to one’s self and is capitalized, although other pronouns, such as he or she, are not capitalized. In Australian English, British English and Irish English, me can refer to someone’s possessions (see archaic and non-standard forms of English personal pronouns).

7.1 Etymology

Further information: Old English pronouns, Proto-Germanic pronouns and Proto-Indo-European pronouns

English I originates from Old English (OE) ic, which had in turn originated from the continuation of Proto-Germanic ik, and ek; ek was attested in the Elder Futhark inscriptions (in some cases notably showing the variant eka; see also ek erilaz). Linguists assume ik to have developed from the unstressed variant of ek. Variants of ic were used in various English dialects up until the 1600s.[1]

Germanic cognates are: Old Frisian ik, Old Norse ek (Danish, Norwegian jeg, Swedish jag, Icelandic ég), Old High German ih (German ich), Gothic ik, and also Dutch ik.

The Proto-Germanic root came, in turn, from the Proto-Indo-European language (PIE). The reconstructed PIE pronoun is *egō, egóm, with cognates including Sanskrit aham, Hittite uk, Latin ego, Greek ἐγώ egō, Old Slavonic azъ, and Alviri-Vidari (an Iranian language) az.

The oblique forms are formed from a stem *me- (English me), the plural from *wei- (English we), the oblique plural from *ns- (English us).

7.2 Capitalization

There is no known record of a definitive explanation from around the early period of this capitalisation practice. It is hypothesised that the capitalization could have been prompted and spread as a result of one or more of the following:

• changes specifically in the pronunciation of letters (introduction of long vowel sounds in Middle English, etc.)

• other linguistic considerations (demarcation of a single-letter word, setting apart a pronoun which is significantly different from others in English, etc.)

• problems with legibility of the minuscule “i”

• sociolinguistic factors (establishment of English as the official language, solidification of English identity, etc.)




Other considerations include:

Capitalization was already employed with pronouns in other languages at that time. It was used to denote respect of the addresser or position of the addressed.

There is also the possibility that the first instances of capitalisation may have been happenstance. Either through chance or a sense of correctness, in the practice or the delivery, the capitalisation may have spread.

A folk legend tells of a printmaker who was convinced by the Faustian demon Mephistopheles to begin the practice of capitalizing “I”.[2]

Many of these explanations fail when measured against other, similar words, but it is possible that the factor or factors that prompted or spread this change were not applied to all similar words or instances.

7.3 Me as a subject pronoun

Further information: Objective case § English

According to traditional grammar, the objective case appears only as the direct object of a verb, the indirect object of a verb, or the object of a preposition. But there are examples, meeting with varying degrees of acceptance, which violate this rule.

• There are exceptions which appear with several pronouns:

• it is me, as well as it is us/him/her/them.
• Me and Bob are (the compound subject with a pronoun). This can be contrasted with the use of the subjective case as the object in to Bob and I.

• as me and than him (as if as and than were being treated as prepositions rather than as conjunctions)
• These exceptions have their own exception: the objective case whom is never so used.

• There are idiosyncratic uses generally restricted to the first person singular pronoun:

• dear me (dear us is also used, but rarely)
• me too and me neither (us too and us neither are rarely used)

• In Caribbean dialect, me can be the subjective case form

7.4 Older versions

[1] Dative case, indirect object

[2] Accusative case, direct object

Many other variations are noted in Middle English sources due to differences in spellings and pronunciations. See Francis Henry Stratmann (1891), A Middle-English dictionary (A Middle English dictionary ed.), [London]: Oxford University Press, and A Concise Dictionary of Middle English from A.D. 1150 TO 1580, A. L. Mayhew, Walter W. Skeat, Oxford, Clarendon Press, 1888.

[1] The genitives my, mine, thy, and thine are used as adjectives before a noun, or as possessive pronouns without a noun. All four forms are used as adjectives: mine and thine are used before nouns beginning in a vowel sound, or before nouns beginning in the letter h, which was usually silent (e.g. thine eyes and mine heart, which was pronounced as mine art), and my and thy before consonants (thy mother, my love). However, only mine and thine are used as possessive pronouns, as in it is thine and they were mine (not *they were my).

[2] From the early Early Modern English period up until the 17th century, his was the possessive of the third person neuter it as well as of the third person masculine he. Genitive “it” appears once in the 1611 King James Bible (Leviticus 25:5) as groweth of it owne accord.



7.5 See also

• English grammar

• English personal pronouns

• Grammar

• Personal pronouns

• Pronouns

• Self

7.6 References

[1]

[2] Volume 1 of Historia von D. Johanni Fausten, dem weyt beschreyten Zauberer unnd Schwarzkünstler, Fridericus Schotus, Spies.

"Etymology of I". etymonline.com. Douglas Harper, n.d. Web. 12 Dec. 2010."Etymology of Me". etymonline.com. Douglas Harper, n.d. Web. 12 Dec. 2010.Halleck, Elaine (editor). "Sum: Pronoun “I” again". LINGUIST List 9.253., n.p., Web. 20 Feb. 1998.Jacobsen, Martin (editor). "Sum: Pronoun 'I'". LINGUIST List 9.253., n.p., Web. 20 Feb. 1998.Mahoney, Nicole. "[http://www.nsf.gov/news/special_reports/linguistics/change.jsp> Language Change]". nsf.gov.n.p. 12 July 2008. Web. 21 Dec. 2010Wells, Edward. "Further Elucidation on the Capitalization of 'I' in English". (a paper in progress). Lingforum.com.n.p., Web. 25 Dec. 2010

7.7 Further reading

• Howe, Stephen (1996). The personal pronouns in the Germanic languages: a study of personal pronoun morphology and change in the Germanic languages from the first records to the present day. Studia linguistica Germanica 43. Walter de Gruyter. ISBN 3-11-014636-3.

• Gaynesford, M. de (2006). I: The Meaning of the First Person Term. Oxford: Oxford University Press. ISBN 0-19-928782-1.

• Wales, Katie (1996). Personal pronouns in present-day English. Studies in English language. Cambridge University Press. ISBN 0-521-47102-8.

• Wren and Martin, English Grammar book for high-schoolers.

7.8 External links

• Video: Saul Kripke, The First Person, January 2006, an analytic philosophical perspective. 70 minutes, hosted by Google video. [Kripke is sick with bronchitis and doesn't always speak into the microphone.]


Chapter 8

Impredicativity

In mathematics and logic, a self-referencing definition is called impredicative. More precisely, a definition is said to be impredicative if it invokes (mentions or quantifies over) the set being defined, or (more commonly) another set which contains the thing being defined.

The opposite of impredicativity is predicativity, which essentially entails building stratified (or ramified) theories where quantification over lower levels results in variables of some new type, distinguished from the lower types that the variable ranges over. A prototypical example is intuitionistic type theory, which retains ramification but discards impredicativity.

Russell’s paradox is a famous example of an impredicative construction, namely the set of all sets which do not contain themselves. The paradox is whether such a set contains itself or not: if it does, then by definition it should not, and if it does not, then by definition it should.

The greatest lower bound of a set X, glb(X), also has an impredicative definition: y = glb(X) if and only if for all elements x of X, y is less than or equal to x, and any z less than or equal to all elements of X is less than or equal to y. But this definition also quantifies over the set (potentially infinite, depending on the order in question) whose members are the lower bounds of X, one of which is the glb itself. Hence predicativism would reject this definition.[1]
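Written out symbolically (a restatement of the condition just given, using standard order-theoretic notation), the definition reads:

\[
y = \operatorname{glb}(X) \iff \bigl(\forall x \in X : y \le x\bigr) \;\wedge\; \bigl(\forall z : (\forall x \in X : z \le x) \Rightarrow z \le y\bigr)
\]

The variable z ranges over all lower bounds of X, a collection to which y itself belongs; this quantification over the thing being defined is exactly what predicativism objects to.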

8.1 History

The vicious circle principle was suggested by Henri Poincaré (1905-6, 1908)[2] and Bertrand Russell in the wake of the paradoxes as a requirement on legitimate set specifications. Sets which do not meet the requirement are called impredicative.

The first modern paradox appeared with Cesare Burali-Forti's 1897 A question on transfinite numbers[3] and would become known as the Burali-Forti paradox. Cantor had apparently discovered the same paradox in his “naive” set theory, and this became known as Cantor’s paradox. Russell’s awareness of the problem originated in June 1901[4] with his reading of Frege's treatise of mathematical logic, his 1879 Begriffsschrift; the offending sentence in Frege is the following:

“On the other hand, it may also be that the argument is determinate and the function indeterminate”.[5]

In other words, given f(a), the function f is the variable and a is the invariant part. So why not substitute the value f(a) for f itself? Russell promptly wrote Frege a letter pointing out that:

“You state ... that a function too, can act as the indeterminate element. This I formerly believed, but now this view seems doubtful to me because of the following contradiction. Let w be the predicate: to be a predicate that cannot be predicated of itself. Can w be predicated of itself? From each answer its opposite follows. Therefore we must conclude that w is not a predicate. Likewise there is no class (as a totality) of those classes which, each taken as a totality, do not belong to themselves. From this I conclude that under certain circumstances a definable collection does not form a totality”.[6]

Frege promptly wrote back to Russell acknowledging the problem:




“Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic”.[7]

While the problem had adverse personal consequences for both men (both had works at the printers that had to be emended), van Heijenoort observes that “The paradox shook the logicians’ world, and the rumbles are still felt today. ... Russell’s paradox, which makes use of the bare notions of set and element, falls squarely in the field of logic. The paradox was first published by Russell in The principles of mathematics (1903) and is discussed there in great detail...”.[8] Russell, after 6 years of false starts, would eventually answer the matter with his 1908 theory of types by “propounding his axiom of reducibility. It says that any function is coextensive with what he calls a predicative function: a function in which the types of apparent variables run no higher than the types of the arguments”.[9] But this “axiom” was met with resistance from all quarters.

The rejection of impredicatively defined mathematical objects (while accepting the natural numbers as classically understood) leads to the position in the philosophy of mathematics known as predicativism, advocated by Henri Poincaré and Hermann Weyl in his Das Kontinuum. Poincaré and Weyl argued that impredicative definitions are problematic only when one or more underlying sets are infinite.

Ernst Zermelo in his 1908 A new proof of the possibility of a well-ordering presents an entire section “b. Objection concerning nonpredicative definition” where he argued against “Poincaré (1906, p. 307) [who states that] a definition is ‘predicative’ and logically admissible only if it excludes all objects that are dependent upon the notion defined, that is, that can in any way be determined by it”.[10] He gives two examples of impredicative definitions: (i) the notion of Dedekind chains and (ii) “in analysis wherever the maximum or minimum of a previously defined ‘completed’ set of numbers Z is used for further inferences. This happens, for example, in the well-known Cauchy proof of the fundamental theorem of algebra, and up to now it has not occurred to anyone to regard this as something illogical”.[11] He ends his section with the following observation: “A definition may very well rely upon notions that are equivalent to the one being defined; indeed, in every definition definiens and definiendum are equivalent notions, and the strict observance of Poincaré's demand would make every definition, hence all of science, impossible”.[12]

Zermelo’s example of minimum and maximum of a previously defined “completed” set of numbers reappears in Kleene 1952:42–43, where Kleene uses the example of least upper bound in his discussion of impredicative definitions; Kleene does not resolve this problem. In the next paragraphs he discusses Weyl’s attempt in his 1918 Das Kontinuum (The continuum) to eliminate impredicative definitions and his failure to retain the “theorem that an arbitrary non-empty set M of real numbers having an upper bound has a least upper bound (Cf. also Weyl 1919.)”[13]

Ramsey argued that “impredicative” definitions can be harmless: for instance, the definition of “tallest person in the room” is impredicative, since it depends on a set of things of which it is an element, namely the set of all persons in the room. Concerning mathematics, an example of an impredicative definition is the smallest number in a set, which is formally defined as: y = min(X) if and only if for all elements x of X, y is less than or equal to x, and y is in X.

Burgess (2005) discusses predicative and impredicative theories at some length, in the context of Frege's logic, Peano arithmetic, second order arithmetic, and axiomatic set theory.

8.2 See also

• Gödel, Escher, Bach

• Gödel, Escher, Bach#Themes

• Impredicative polymorphism

• Richard’s paradox

8.3 Notes

[1] Kleene 1952:42-43

[2] dates derived from Kleene 1952:42

[3] van Heijenoort’s commentary before Burali-Forti’s (1897) A question on transfinite numbers in van Heijenoort 1967:104; see also his commentary before Georg Cantor’s (1899) Letter to Dedekind in van Heijenoort 1967:113



[4] Commentary by van Heijenoort before Bertrand Russell’s Letter to Frege in van Heijenoort 1967:124

[5] Gottlob Frege (1879) Begriffsschrift in van Heijenoort 1967:23

[6] Bertrand Russell’s 1902 Letter to Frege in van Heijenoort 1967:124-125

[7] Gottlob Frege’s (1902) Letter to Russell in van Heijenoort 1967:127

[8] van Heijenoort’s commentary before Bertrand Russell’s (1902) Letter to Frege 1967:124

[9] Willard V. Quine’s commentary before Bertrand Russell’s 1908 Mathematical logic as based on the theory of types

[10] van Heijenoort 1967:190

[11] van Heijenoort 1967:190-191

[12] van Heijenoort 1967:191

[13] Kleene 1952:43

8.4 References

• Predicative and Impredicative Definitions entry in the Internet Encyclopedia of Philosophy

• PlanetMath article on predicativism

• John Burgess, 2005. Fixing Frege. Princeton Univ. Press.

• Solomon Feferman, 2005, "Predicativity" in The Oxford Handbook of Philosophy of Mathematics and Logic. Oxford University Press: 590–624.

• Stephen C. Kleene 1952 (1971 edition), Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam NY, ISBN 0-7204-2103-9. In particular cf. his §11 The Paradoxes (pp. 36–40) and §12 First inferences from the paradoxes IMPREDICATIVE DEFINITION (p. 42). He states that his 6 or so (famous) examples of paradoxes (antinomies) are all examples of impredicative definition, and says that Poincaré (1905–6, 1908) and Russell (1906, 1910) “enunciated the cause of the paradoxes to lie in these impredicative definitions” (p. 42), however, “parts of mathematics we want to retain, particularly analysis, also contain impredicative definitions.” (ibid). Weyl in his 1918 (“Das Kontinuum”) attempted to derive as much of analysis as was possible without the use of impredicative definitions, “but not the theorem that an arbitrary non-empty set M of real numbers having an upper bound has a least upper bound (Cf. also Weyl 1919)” (p. 43).

• Hans Reichenbach 1947, Elements of Symbolic Logic, Dover Publications, Inc., NY, ISBN 0-486-24004-5. Cf. his §40. The antinomies and the theory of types (p. 218 ff.), wherein he demonstrates how to create antinomies, including the definition of impredicable itself (“Is the definition of ‘impredicable’ impredicable?"). He claims to show methods for eliminating the “paradoxes of syntax” (“logical paradoxes”) by use of the theory of types, and “the paradoxes of semantics” by the use of metalanguage (his “theory of levels of language”). He attributes the suggestion of this notion to Russell and more concretely to Ramsey.

• Jean van Heijenoort 1967, third printing 1976, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge MA, ISBN 0-674-32449-8 (pbk.)


Chapter 9

Indirect self-reference

Indirect self-reference describes an object referring to itself indirectly.

For example, define the function f such that f(x) = x(x). Any function passed as an argument to f is invoked with itself as an argument, and thus any use of that argument is indirectly referring to itself.

This example is similar to the Scheme expression "((lambda(x)(x x)) (lambda(x)(x x)))", which is expanded to itself by beta reduction, and so its evaluation loops indefinitely despite the lack of explicit looping constructs. An equivalent example can be formulated in lambda calculus.

Indirect self-reference is special in that its self-referential quality is not explicit, as it is in the sentence “this sentence is false.” The phrase “this sentence” refers directly to the sentence as a whole. An indirectly self-referential sentence would replace the phrase “this sentence” with an expression that effectively still referred to the sentence, but did not use the pronoun “this.”

An example will help to explain this. Suppose we define the quine of a phrase to be the quotation of the phrase followed by the phrase itself. So, the quine of:

is a sentence fragment

would be:

“is a sentence fragment” is a sentence fragment

which, incidentally, is a true statement.

Now consider the sentence:

“when quined, makes quite a statement” when quined, makes quite a statement

The quotation here, plus the phrase “when quined,” indirectly refers to the entire sentence. The importance of this fact is that the remainder of the sentence, the phrase “makes quite a statement,” can now make a statement about the sentence as a whole. If we had used a pronoun for this, we could have written something like “this sentence makes quite a statement.”

It seems silly to go through this trouble when pronouns will suffice (and when they make more sense to the casual reader), but in systems of mathematical logic, there is generally no analog of the pronoun. It is somewhat surprising, in fact, that self-reference can be achieved at all in these systems.

Upon closer inspection, it can be seen that the Scheme example above in fact uses a quine, and f(x) is actually the quine function itself.

Indirect self-reference was studied in great depth by W. V. Quine (after whom the operation above is named), and occupies a central place in the proof of Gödel’s incompleteness theorem. Among the paradoxical statements developed by Quine is the following:

“yields a false statement when preceded by its quotation” yields a false statement when preceded by its quotation
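The quine operation described above is mechanical enough to write down in code. The following C sketch (our own illustration, not part of the original article; the function name quine_of is hypothetical) builds the quine of a phrase by concatenating its quotation with the phrase itself:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a newly allocated string holding the quine of a phrase:
   the quotation of the phrase, a space, then the phrase itself. */
char *quine_of(const char *phrase) {
    size_t n = strlen(phrase);
    char *result = malloc(2 * n + 4);   /* 2 quotes + space + phrase twice + NUL */
    if (result != NULL)
        sprintf(result, "\"%s\" %s", phrase, phrase);
    return result;
}

int main(void) {
    char *q = quine_of("is a sentence fragment");
    if (q != NULL) {
        puts(q);    /* prints: "is a sentence fragment" is a sentence fragment */
        free(q);
    }
    return 0;
}

Applying the same operation to the phrase when quined, makes quite a statement reproduces the indirectly self-referential sentence discussed above.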

9.1 See also

• Fixed point



Chapter 10

Liar paradox in early Islamic tradition

Many early Islamic philosophers and logicians discussed the liar paradox. Their work on the subject began in the 10th century and continued to Athīr al-Dīn al-Abharī and Nasir al-Din al-Tusi of the middle 13th century[1] and beyond.[2] Although the Liar paradox has been well known in Greek and Latin traditions, the works of Arabic scholars have only recently been translated into English.[1]

Each group of early Islamic philosophers discussed different problems presented by the paradox. They pioneered unique solutions that were not influenced by Western ideas.

10.1 Athīr and the Liar paradox

Athīr al-Dīn Mufaḍḍal (b. ʿUmar Abharī, d. 663/1264) was a Persian philosopher, astronomer and mathematician from the city of Abhar in Persia. There is some speculation that his works on the Liar paradox could have been known to Western logicians, and in particular to Thomas Bradwardine.[3]

He analyzed the Liar sentence as follows:

One of the difficult fallacies is the conjunction of the two contradictories (Jamʿ al-naqīḍyan) when someone says, “All that I say at this moment is false”. This sentence (qawl) is either true or false. If it is true, then it must be true and false. And if it is not true, then it is necessary that one of his sentences at this moment is true, as long as he utters something. But, he says nothing at this moment other than this sentence. Thus, this sentence is necessarily true and false.[4]

In other words, Athīr says that if the Liar sentence is false, which means that the Liar falsely declares that all he says at the moment is false, then the Liar sentence is true; and, if the Liar sentence is true, which means that the Liar truthfully declares that all he says at the moment is false, then the Liar sentence is false. In any case, the Liar sentence is both true and false at the same time, which is a paradox.[4]

Athīr offers the following solution for the paradox:

To solve the paradox we say: we should not concede that if it is false then one of his sentences (kalām) is true. For its being true is taken to be the conjunction of its being true and being false. Therefore its being false necessitates the non-conjunction of its being true and being false. And the non-conjunction of its being true and being false does not necessitate its being true.[4]

According to the traditional idealization[5] that presumably was used by Athīr, the sentence, as a universal proposition, is false only when “either it has a counter-instance or its subject term is empty”.[6]

• Other examples of a counter-instance include: it is false to say that all birds can fly, because there are some that cannot, for example penguins.[6]




• Other examples of an empty subject term include: it is false to say that all flying carpets have four corners, and not only because some carpets are round or have three corners, but rather because there are no flying carpets at all.[6]

The Liar sentence, however, has neither an empty subject nor a counter-instance. This fact creates an obstacle for Athīr, who must show what is unique about the Liar sentence, and how the Liar sentence could still be only true or false in view of the “true” and “false” conditions set up in the universal proposition’s description. Athīr tries to solve the paradox by applying to it the laws of negation of a conjunction and negation of a disjunction.[6]

Ahmed Alwishah, who has a Ph.D. in Islamic Philosophy, and David Sanson, who has a Ph.D. in Philosophy, explain that Athīr actually claims that:

(1) “It is not the case that, if the Liar Sentence is not both true and false, then it is true.”[7]

Alwishah and Sanson continue: “The general principle behind (1) is clear enough: the negation of a conjunction does not entail the negation of a conjunct; so from not both true and false you cannot infer not false and so true. Abharī appears to be saying that the Liar rests on an elementary scope fallacy! But, of course, Abharī is not entitled to (1). In some cases, the negation of a conjunction does entail the negation of a conjunct: ‘not both P and P’, for example, entails ‘not P’. As a general rule, the negation of a conjunction entails the negation of each conjunct whenever the conjuncts are logically equivalent, i.e., whenever the one follows from the other and vice versa. So Abharī is entitled to (1) only if he is entitled to assume that ‘The Liar Sentence is true’ and ‘The Liar Sentence is false’ are not logically equivalent.”[7]

The Liar sentence is a universal proposition (the Liar says “All I say ...”), so “if it is (non-vacuously) false it must have a counter-instance”.[7] But in this case, when the only thing that the liar is saying is the single sentence declaring that what he is saying at the moment is false, the only available counter-instance is the Liar sentence itself. When staging the paradox Abharī said: “if it is not true, then it is necessary that one of his sentences at this moment is true, as long as he utters something. But, he says nothing at this moment other than this sentence. Thus, this sentence is necessarily true and false”.[4] So the explanation provided by Abharī himself demonstrates that both ‘The Liar Sentence is false’ and ‘The Liar Sentence is true’ are logically equivalent. If they are logically equivalent, then, contrary to (1), the negation of the conjunction does entail the negation of each conjunct. Abharī’s ‘solution’ therefore fails.[8]

10.2 Nasir al-Din al-Tusi on the Liar paradox

Naṣīr al-Dīn al-Ṭūsī was a Persian[9] polymath[10] and prolific writer: an astronomer, biologist, chemist, mathematician, philosopher, physician, physicist, scientist, theologian and Marja Taqleed. He adhered to the Ismaili, and subsequently Twelver Shī‘ah, Islamic belief systems.[11] The Arab scholar Ibn Khaldun (1332–1406) considered Tusi to be the greatest of the later Persian scholars.

Ṭūsī's work on the paradox begins with a discussion of the paradox and the solution offered by Abharī, with which Ṭūsī disagrees. As Alwishah and Sanson point out, “Ṭūsī argues that whatever fancy thing (conjunction, conditional) Abharī wants to identify as the truth condition for the Liar Sentence, it will not matter, because pace Abharī, we can generate the paradox without inferring, from the negation of a complex truth condition, the negation of one of its parts. We can argue directly that its being false entails the negation of its being false, and so entails its being true.”[12]

Ṭūsī then sets the stage for his own solution of the Liar paradox, writing that:

If a declarative sentence, by its nature, can declare-something-about anything, then it is possible that it itself can declare-something-about another declarative sentence.[13]

He does not see any reason that could prevent a declarative sentence from declaring something about another declarative sentence.[13]

With an example of two declarative sentences, (D1) “It is false” and (D2) “Zayd is sitting”, Ṭūsī explains how one declarative sentence (D1) can declare another declarative sentence (D2) to be false: “It is false that Zayd is sitting”.[13] There is no paradox in the above two declarative sentences because they have different subjects. To generate a paradox, a declarative sentence must declare something about itself. If (D1) declares itself to be false, then this declaration referring to itself as being “false” creates a paradox.[13]

Ṭūsī writes:



Naṣīr al-Dīn al-Ṭūsī commemorated on an Iranian stamp on the 700th anniversary of his death

Moreover, if the first declarative sentence declares itself to be false, then [both] its being true, insofar as it is a declarative sentence, and its being false, insofar as it is that-about-which-something-is-declared, are concomitant. Thus, the following paradox can be generated: The first declarative sentence, which is a declaration (khabar) about itself, namely that it is false, is either false or true. If it is true, then it must be false, because it declares itself to be false. If it is false, then it must be true, because if it is said falsely, then it will become true, which is absurd.[14]



The above conclusions are very important to the history of the Liar Paradox. Alwishah and Sanson point out: “It is hard to overemphasize how remarkable this passage is. The contemporary reader will be familiar with the idea that the Liar Paradox is a paradox of self-reference. But Ṭūsī is, as far as we know, the first person to express this idea. This passage has no precedent in any tradition. Ṭūsī has performed three remarkable feats in short order. First, his Liar Sentence is singular: its subject is itself, and it declares itself to be false. Gone, then, is the choice between universal or particular Liar Sentence, and the associated problem of adding further assumptions to generate a genuine paradox. Second, he has characterized the paradox as one of self-reference. Third, he has identified a key assumption that might be responsible for generating the entire problem: the assumption that a declarative sentence, by its nature, can declare-something-about anything.”[14]

Ṭūsī recognizes that if a declarative sentence that declares itself to be false is false, this does not necessitate its being true; he says that it would be absurd to claim that this declarative sentence is true only because it is not false.[14] Ṭūsī writes:

. . . its being false, insofar as it is a declarative sentence, does not necessitate its being true. Instead, its being false necessitates the denial of its being false, insofar as it is that-about-which-something-is-declared, and [necessitates] its being false, insofar as it is a declarative sentence. Hence, we should not concede that, in this way, the denial of its being false necessitates its being true.[15]

Ṭūsī then interprets the definitions of “true” and “false”, in an attempt to prove that those definitions should not be taken into consideration when dealing with a declarative sentence that declares itself, as its own subject, to be false. Al-Baghdādī's definition of “truth” and “falsity” says that: “truth is an agreement with the subject, and falsity is the opposite of that”. Ṭūsī argues that this definition cannot be applied to a declarative sentence that declares its own subject to be false, because then there are at least two opposite parts that are in disagreement with each other. The same subject cannot be in disagreement with itself. Therefore a self-referential declarative sentence that declares itself to be false is neither false nor true, and the truth/falsity definitions are not applicable to such sentences.[16]

the result of a judgment that applies truth and falsity to something to which they in no way apply, and to apply them in any way is the misuse of a predicate.[16]

Ṭūsī stopped short of offering a solution for the Liar sentence discussed by Āmidī, “All that I say at this moment is false”. This sentence presents a different case because it can be interpreted as declaring something about itself and something about another sentence. The solution for this paradox is absent from Ṭūsī's papers.[17]

10.3 References

[1] Alwishah & Sanson 2009, p. 97

[2] Alwishah & Sanson 2009, p. 123

[3] Alwishah & Sanson 2009, p. 113

[4] Alwishah & Sanson 2009, p. 107

[5] Parsons, Terence (1 October 2006), “The Traditional Square of Opposition”, in Zalta, Edward N., The Stanford Encyclopedia of Philosophy, Stanford, CA: Center for the Study of Language and Information, Stanford University

[6] Alwishah & Sanson 2009, p. 108

[7] Alwishah & Sanson 2009, p. 110

[8] Alwishah & Sanson 2009, p. 111

[9] “Tusi, Nasir al-Din al-". Encyclopædia Britannica Online. Encyclopædia Britannica. 27 December 2007.

[10] Nasr 2006, p. 199

[11] Ṭūsī 2005, p. 2



[12] Alwishah & Sanson 2009, p. 114

[13] Alwishah & Sanson 2009, p. 115

[14] Alwishah & Sanson 2009, p. 116

[15] Alwishah & Sanson 2009, p. 117

[16] Alwishah & Sanson 2009, p. 121

[17] Alwishah & Sanson 2009, p. 122

10.4 Bibliography

• Alwishah, Ahmed; Sanson, David (2009), “The Early Arabic Liar: The Liar Paradox in the Islamic World from the Mid-Ninth to the Mid-Thirteenth Centuries CE” (PDF), Vivarium (Leiden: Brill) 47 (1): 97–127, doi:10.1163/156853408X345909, ISSN 0042-7543

• Nasr, Seyyed Hossein (2006), Islamic Philosophy from Its Origin to the Present: Philosophy in the Land ofProphecy, Albany: SUNY Press, p. 380, ISBN 0-7914-6799-6

• Ṭūsī, Naṣīr al-Dīn Muḥammad ibn Muḥammad; Badakchani, S. J. (2005), Paradise of Submission: A Medieval Treatise on Ismaili Thought, Ismaili Texts and Translations 5, London: I.B. Tauris in association with Institute of Ismaili Studies, p. 300, ISBN 1-86064-436-8


Chapter 11

Non-well-founded set theory

Non-well-founded set theories are variants of axiomatic set theory that allow sets to contain themselves and otherwise violate the rule of well-foundedness. In non-well-founded set theories, the foundation axiom of ZFC is replaced by axioms implying its negation.

The study of non-well-founded sets was initiated by Dmitry Mirimanoff in a series of papers between 1917 and 1920, in which he formulated the distinction between well-founded and non-well-founded sets; he did not regard well-foundedness as an axiom. Although a number of axiomatic systems of non-well-founded sets were proposed afterwards, they did not find much in the way of applications until Peter Aczel’s hyperset theory in 1988.[1]

The theory of non-well-founded sets has been applied in the logical modelling of non-terminating computational processes in computer science (process algebra and final semantics), linguistics and natural language semantics (situation theory), philosophy (work on the Liar Paradox), and, in a different setting, non-standard analysis.[2]

11.1 Details

In 1917, Dmitry Mirimanoff introduced[3] the concept of well-foundedness of a set:

A set, x₀, is well-founded iff it has no infinite descending membership sequence:

· · · ∈ x₂ ∈ x₁ ∈ x₀.

In ZFC, there is no infinite descending ∈-sequence by the axiom of regularity. In fact, the axiom of regularity is often called the foundation axiom since it can be proved within ZFC− (that is, ZFC without the axiom of regularity) that well-foundedness implies regularity. In variants of ZFC without the axiom of regularity, the possibility of non-well-founded sets with set-like ∈-chains arises. For example, a set A such that A ∈ A is non-well-founded.

Although Mirimanoff also introduced a notion of isomorphism between possibly non-well-founded sets, he considered neither an axiom of foundation nor of anti-foundation.[4] In 1926 Paul Finsler introduced the first axiom that allowed non-well-founded sets. After Zermelo adopted Foundation into his own system in 1930 (from previous work of von Neumann 1925–1929), interest in non-well-founded sets waned for decades.[5] An early non-well-founded set theory was Willard Van Orman Quine’s New Foundations, although it is not merely ZF with a replacement for Foundation.

Several proofs of the independence of Foundation from the rest of ZF were published in the 1950s, particularly by Paul Bernays (1954), following an announcement of the result in an earlier paper of his from 1941, and by Ernst Specker, who gave a different proof in his Habilitationsschrift of 1951, a proof which was published in 1957. Then in 1957 Rieger’s theorem was published, which gave a general method for such proofs to be carried out, rekindling some interest in non-well-founded axiomatic systems.[6] The next axiom proposal came in a 1960 congress talk of Dana Scott (never published as a paper), proposing an alternative axiom now called SAFA.[7] Another axiom proposed in the late 1960s was Maurice Boffa’s axiom of superuniversality, described by Aczel as the highpoint of research of its decade.[8] Boffa’s idea was to make foundation fail as badly as it can (or rather, as extensionality permits): Boffa’s axiom implies that every extensional set-like relation is isomorphic to the elementhood predicate on a transitive class.

A more recent approach to non-well-founded set theory, pioneered by M. Forti and F. Honsell in the 1980s, borrows from computer science the concept of a bisimulation. Bisimilar sets are considered indistinguishable and thus equal,




which leads to a strengthening of the axiom of extensionality. In this context, axioms contradicting the axiom of regularity are known as anti-foundation axioms, and a set that is not necessarily well-founded is called a hyperset.

Four mutually independent anti-foundation axioms are well known, sometimes abbreviated by the first letter in the following list:

1. AFA (‘Anti-Foundation Axiom’) – due to M. Forti and F. Honsell (this is also known as Aczel’s anti-foundation axiom);

2. SAFA (‘Scott’s AFA’) – due to Dana Scott,

3. FAFA (‘Finsler’s AFA’) – due to Paul Finsler,

4. BAFA (‘Boffa’s AFA’) – due to Maurice Boffa.

They essentially correspond to four different notions of equality for non-well-founded sets. The first of these, AFA, is based on accessible pointed graphs (apg) and states that two hypersets are equal if and only if they can be pictured by the same apg. Within this framework, it can be shown that the so-called Quine atom, formally defined by Q = {Q}, exists and is unique.

Each of the axioms given above extends the universe of the previous, so that: V ⊆ A ⊆ S ⊆ F ⊆ B. In the Boffa universe, the distinct Quine atoms form a proper class.[9]
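As a rough, informal illustration of the bisimulation idea behind AFA (a sketch of our own, not from the original article; the Graph type and the bisimilar function are invented names), the following C program computes the greatest bisimulation between two small accessible pointed graphs by starting from the full relation and discarding pairs that violate the back-and-forth condition. It confirms that a single node with a self-loop and a two-node cycle are bisimilar pictures, both depicting the Quine atom Q = {Q}:

#include <stdio.h>
#include <stdbool.h>

#define MAXN 4

/* A tiny accessible pointed graph: node 0 is the point, and node i has
   deg[i] children listed in edge[i][0..deg[i]-1]. */
typedef struct {
    int n;
    int deg[MAXN];
    int edge[MAXN][MAXN];
} Graph;

/* Greatest bisimulation between g and h, computed by refinement: start with
   the full relation and repeatedly drop pairs (a, b) whose children cannot
   be matched up, until nothing changes. */
static bool bisimilar(const Graph *g, const Graph *h) {
    bool R[MAXN][MAXN];
    for (int a = 0; a < g->n; a++)
        for (int b = 0; b < h->n; b++)
            R[a][b] = true;
    bool changed = true;
    while (changed) {
        changed = false;
        for (int a = 0; a < g->n; a++)
            for (int b = 0; b < h->n; b++) {
                if (!R[a][b]) continue;
                bool ok = true;
                /* every child of a must relate to some child of b ... */
                for (int i = 0; ok && i < g->deg[a]; i++) {
                    bool found = false;
                    for (int j = 0; j < h->deg[b]; j++)
                        if (R[g->edge[a][i]][h->edge[b][j]]) { found = true; break; }
                    ok = found;
                }
                /* ... and every child of b must relate to some child of a */
                for (int j = 0; ok && j < h->deg[b]; j++) {
                    bool found = false;
                    for (int i = 0; i < g->deg[a]; i++)
                        if (R[g->edge[a][i]][h->edge[b][j]]) { found = true; break; }
                    ok = found;
                }
                if (!ok) { R[a][b] = false; changed = true; }
            }
    }
    return R[0][0];    /* are the two points bisimilar? */
}

int main(void) {
    /* One picture of Q = {Q}: a single node whose only member is itself. */
    Graph loop  = { .n = 1, .deg = {1},    .edge = {{0}} };
    /* Another picture: two nodes, each containing only the other. */
    Graph cycle = { .n = 2, .deg = {1, 1}, .edge = {{1}, {0}} };
    printf("bisimilar: %s\n", bisimilar(&loop, &cycle) ? "yes" : "no");   /* yes */
    return 0;
}

Under AFA, bisimilar pictures depict the same hyperset, so both graphs above picture the one and only Quine atom.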

It is worth emphasizing that hyperset theory is an extension of classical set theory rather than a replacement: the well-founded sets within a hyperset domain conform to classical set theory.

11.2 Applications

Aczel’s hypersets were extensively used by Jon Barwise and John Etchemendy in their 1987 book The Liar, on the liar’s paradox; the book is also a good introduction to the topic of non-well-founded sets.

Boffa’s superuniversality axiom has found application as a basis for axiomatic nonstandard analysis.[10]

11.3 See also

• Alternative set theory

• Universal set

• Turtles all the way down

11.4 Notes

[1] Pakkan and Akman (1994, section link); Rathjen (2004); Sangiorgi (2011) pp. 17–19 and 26

[2] Ballard & Hrbáček (1992).

[3] Levy (2002), p. 68; Hallett (1986), p. 186; Aczel (1988) p. 105 all citing Mirimanoff (1917)

[4] Aczel (1988) p. 105

[5] Aczel (1988), p. 107.

[6] Aczel (1988), pp. 107–108.

[7] Aczel (1988), pp. 108–109.

[8] Aczel (1988), p. 110.

[9] Nitta, Okada, Tzouvaras (2003)

[10] Kanovei & Reeken (2004), p. 303.



11.5 References

• Aczel, Peter (1988), Non-well-founded sets (PDF), CSLI Lecture Notes 14, Stanford, CA: Stanford University, Center for the Study of Language and Information, pp. xx+137, ISBN 0-937073-22-9, MR 0940014.

• Ballard, David; Hrbáček, Karel (1992), “Standard foundations for nonstandard analysis”, Journal of Symbolic Logic 57 (2): 741–748, doi:10.2307/2275304, JSTOR 2275304.

• Hallett, Michael (1986), Cantorian set theory and limitation of size, Oxford University Press.

• Levy, Azriel (2002), Basic set theory, Dover Publications.

• Finsler, P., Über die Grundlagen der Mengenlehre, I. Math. Zeitschrift, 25 (1926), 683–713; translation in Finsler, Paul; Booth, David (1996). Finsler Set Theory: Platonism and Circularity: Translation of Paul Finsler’s Papers on Set Theory with Introductory Comments. Springer. ISBN 978-3-7643-5400-8.

• Boffa, M., “Les ensembles extraordinaires.” Bulletin de la Societe Mathematique de Belgique. XX: 3–15, 1968

• Boffa, M., Forcing et négation de l’axiome de Fondement, Memoire Acad. Sci. Belg. tome XL, fasc. 7,(1972).

• Scott, Dana. “A different kind of model for set theory.” Unpublished paper, talk given at the 1960 Stanford Congress of Logic, Methodology and Philosophy of Science. 1960.

• Mirimanoff, D. (1917), “Les antinomies de Russell et de Burali-Forti et le probleme fondamental de la theorie des ensembles”, L’Enseignement Mathématique 19: 37–52.

• Nitta; Okada; Tzouvaras (2003), Classification of non-well-founded sets and an application (PDF)

• M. Rathjen (2004), “Predicativity, Circularity, and Anti-Foundation”, in Link, Godehard, One Hundred Years of Russell's Paradox: Mathematics, Logic, Philosophy (PDF), Walter de Gruyter, ISBN 978-3-11-019968-0

• Pakkan, M. J.; Akman, V. (1994–1995), “Issues in commonsense set theory”, Artificial Intelligence Review 8(4): 279–308, doi:10.1007/BF00849061

• Barwise, Jon; Moss, Lawrence S. (1996), Vicious circles. On the mathematics of non-wellfounded phenomena, CSLI Lecture Notes 60, CSLI Publications, ISBN 1-57586-009-0

• Sangiorgi, Davide (2011), “Origins of bisimulation and coinduction”, in Sangiorgi, Davide; Rutten, Jan, Advanced Topics in Bisimulation and Coinduction, Cambridge University Press, ISBN 978-1-107-00497-9

• Kanovei, Vladimir; Reeken, Michael (2004), Nonstandard Analysis, Axiomatically, Springer, ISBN 978-3-540-22243-9

• Barwise, Jon; Etchemendy, John (1987), The Liar, Oxford University Press

• Devlin, Keith (1993), The Joy of Sets: Fundamentals of Contemporary Set Theory (2nd ed.), Springer, ISBN 978-0-387-94094-6, §7. Non-Well-Founded Set Theory

11.6 Further reading

• Moss, Lawrence S. “Non-wellfounded Set Theory”. Stanford Encyclopedia of Philosophy.

11.7 External links

• Metamath page on the axiom of Regularity. Scroll to the bottom to see how few Metamath theorems invoke this axiom.


Chapter 12

Recursion

For other uses, see Recursion (disambiguation).

Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion. The term has a variety of meanings specific to a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, in which it refers to a method of defining functions in which the function being defined is applied within its own definition. Specifically, this defines an infinite number of instances (function values), using a finite expression that for some instances may refer to other instances, but in such a way that no loop or infinite chain of references can occur. The term is also used more generally to describe a process of repeating objects in a self-similar way.

12.1 Formal definitions

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties:

1. A simple base case (or cases)—a terminating scenario that does not use recursion to produce an answer

2. A set of rules that reduce all other cases toward the base case

For example, the following is a recursive definition of a person’s ancestors:

• One’s parents are one’s ancestors (base case).

• The ancestors of one’s ancestors are also one’s ancestors (recursion step).

The Fibonacci sequence is a classic example of recursion:

Fib(0) = 0 as base case,
Fib(1) = 1 as base case,
For all integers n > 1, Fib(n) := Fib(n − 1) + Fib(n − 2).

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers.

Recursively defined mathematical objects include functions, sets, and especially fractals.

There are various more tongue-in-cheek “definitions” of recursion; see recursive humor.




12.2 Informal definition

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules. The running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe; running a procedure is like actually preparing the meal.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is where (at least) one of its steps calls for a new instance of the very same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made. This of course immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete, like a sourdough recipe that also tells you how to get some starter dough in case you've never made it before. Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old (partially executed) invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedures have progressed. For this reason recursive definitions are very rare in everyday situations.

An example could be the following procedure to find a way through a maze. Proceed forward until reaching either an exit or a branching point (a dead end is considered a branching point with 0 branches). If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point and report failure. Whether this actually defines a terminating procedure depends on the nature of the maze: it must not allow loops. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried.
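The maze procedure just described can be sketched directly in code. The following C program (our own sketch, not from the article; the grid, coordinates and function names are invented for illustration) explores a small grid maze recursively, trying each branch in turn and backtracking from dead ends; the visited array does the bookkeeping of which branches have already been tried and rules out loops:

#include <stdio.h>
#include <stdbool.h>

#define ROWS 5
#define COLS 5

/* '.' is open floor, '#' is a wall, 'E' is the exit. */
static const char maze[ROWS][COLS + 1] = {
    "#####",
    "#..E#",
    "#.###",
    "#...#",
    "#####",
};
static bool visited[ROWS][COLS];

/* Recursively search from cell (r, c); returns true if an exit is reachable. */
static bool solve(int r, int c) {
    if (r < 0 || r >= ROWS || c < 0 || c >= COLS) return false;
    if (maze[r][c] == '#' || visited[r][c]) return false;
    visited[r][c] = true;
    if (maze[r][c] == 'E') return true;            /* base case: found the exit */
    return solve(r - 1, c) || solve(r + 1, c)       /* otherwise try each branch */
        || solve(r, c - 1) || solve(r, c + 1);      /* in turn, recursively      */
}

int main(void) {
    printf("exit reachable: %s\n", solve(3, 1) ? "yes" : "no");
    return 0;
}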

12.3 In language

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[1][2] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.

This provides a way of understanding the creativity of language (the unbounded number of grammatical sentences), because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... Of course, there are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another. Over the years, languages in general have proved amenable to this kind of analysis.

Recently, however, the generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[3] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[4]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. A single denotation for it that is suitably flexible is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[5]

Page 41: Self Reference

36 CHAPTER 12. RECURSION

12.3.1 Recursive humor

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of:

Recursion, see Recursion.[6]

A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself (“recursion 86, 139, 141, 182, 202, 269”). The earliest version of this joke was in “Software Tools” by Kernighan and Plauger, and also appears in “The UNIX Programming Environment” by Kernighan and Pike. It did not appear in the first edition of The C Programming Language.

Another joke is that “To understand recursion, you must understand recursion.”[6] In the English-language version of the Google web search engine, when a search for “recursion” is made, the site suggests “Did you mean: recursion.” An alternative form is the following, from Andrew Plotkin: “If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is.”

Recursive acronyms can also be examples of recursive humor. PHP, for example, stands for “PHP Hypertext Preprocessor”, WINE stands for “Wine Is Not an Emulator”, and GNU stands for “GNU’s not Unix”.

12.4 In mathematics

12.4.1 Recursively defined sets

Main article: Recursive definition

Example: the natural numbers

See also: Closure (mathematics)

The canonical example of a recursively defined set is given by the natural numbers:

0 is in N
if n is in N, then n + 1 is in N
The set of natural numbers is the smallest set satisfying the previous two properties.

Example: The set of true reachable propositions

Another interesting example is the set of all “true reachable” propositions in an axiomatic system.

• If a proposition is an axiom, it is a true reachable proposition.

• If a proposition can be obtained from true reachable propositions by means of inference rules, it is a true reachable proposition.

• The set of true reachable propositions is the smallest set of propositions satisfying these conditions.

This set is called 'true reachable propositions' because in non-constructive approaches to the foundations of mathematics, the set of true propositions may be larger than the set recursively constructed from the axioms and rules of inference. See also Gödel’s incompleteness theorems.



12.4.2 Finite subdivision rules

Main article: Finite subdivision rule

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard 'middle thirds' technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.
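As a small illustration of the 'middle thirds' idea (a sketch of our own, not from the article), the following C program recursively applies the rule to the unit interval, keeping the two outer thirds at each step, and prints the intervals remaining after a chosen depth:

#include <stdio.h>

/* Apply the middle-thirds rule to [a, b]: at depth 0 keep the interval,
   otherwise discard the open middle third and recurse on the outer thirds. */
static void cantor(double a, double b, int depth) {
    if (depth == 0) {
        printf("[%.4f, %.4f]\n", a, b);
        return;
    }
    double third = (b - a) / 3.0;
    cantor(a, a + third, depth - 1);   /* left third  */
    cantor(b - third, b, depth - 1);   /* right third */
}

int main(void) {
    cantor(0.0, 1.0, 3);   /* the 8 intervals of the third approximation */
    return 0;
}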

12.4.3 Functional recursion

A function may be partly defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must lead to non-recursively defined values, in this case F(0) = 0 and F(1) = 1.

A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot easily be expressed without recursion.
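Both functions can be transcribed directly from their recursive definitions. The following C sketch (our own, for illustration only) mirrors the base cases and recursive steps stated above:

#include <stdio.h>

/* F(0) = 0, F(1) = 1, F(n) = F(n - 1) + F(n - 2) for n > 1. */
static unsigned long fib(unsigned int n) {
    if (n < 2) return n;                  /* base cases */
    return fib(n - 1) + fib(n - 2);       /* recursive step */
}

/* The Ackermann function, defined by recursion on both arguments. */
static unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}

int main(void) {
    printf("fib(10) = %lu\n", fib(10));                  /* 55 */
    printf("ackermann(2, 3) = %lu\n", ackermann(2, 3));  /* 9  */
    return 0;
}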

12.4.4 Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction, a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

12.4.5 Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).
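As a rough illustration (our own sketch, not from the article), the classic coin-change problem below restates “fewest coins for amount a” in terms of the same problem at smaller amounts, a Bellman-style recursion, and then solves it bottom-up in the dynamic-programming manner described above:

#include <stdio.h>

#define MAX_AMOUNT 100
#define INF 1000000

/* min_coins[a] = fewest coins summing to a, where
   min_coins[a] = 1 + min over denominations d <= a of min_coins[a - d]. */
int main(void) {
    const int coins[] = {1, 5, 10, 25};
    const int ncoins = sizeof coins / sizeof coins[0];
    int min_coins[MAX_AMOUNT + 1];

    min_coins[0] = 0;                              /* base case */
    for (int a = 1; a <= MAX_AMOUNT; a++) {
        min_coins[a] = INF;
        for (int i = 0; i < ncoins; i++)
            if (coins[i] <= a && min_coins[a - coins[i]] + 1 < min_coins[a])
                min_coins[a] = min_coins[a - coins[i]] + 1;
    }
    printf("fewest coins for 63 cents: %d\n", min_coins[63]);   /* 6 */
    return 0;
}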

12.5 In computer science

Main article: Recursion (computer science)

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming. This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in C code:

unsigned int factorial(unsigned int n) {
    if (n == 0) {
        return 1;                      /* base case */
    } else {
        return n * factorial(n - 1);   /* recursive step */
    }
}

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations to define one or more sequences recursively. Some specific kinds of recurrence relation can be “solved” to obtain a non-recursive definition.
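For instance (a simple worked example of our own), the recurrence a(0) = 1, a(n) = 2·a(n − 1) can be “solved” by unwinding it:

\[
a(n) = 2\,a(n-1) = 2^2\,a(n-2) = \cdots = 2^n\,a(0) = 2^n,
\]

so the non-recursive definition is a(n) = 2ⁿ.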



Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually simplicity. The main disadvantage is often that the algorithm may require large amounts of memory if the depth of the recursion is very large.

12.6 In art

The Russian Doll or Matryoshka Doll is a physical artistic example of the recursive concept.

12.7 The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : N → X (where N denotes the set of natural numbers including zero) such that

F (0) = a

F (n+ 1) = f(F (n))

for any natural number n.
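In programming terms, the F whose existence the theorem guarantees is simply the n-fold iteration of f starting from a. A minimal C sketch (ours; int stands in for the set X and the names are illustrative):

#include <stdio.h>

/* F(0) = a and F(n + 1) = f(F(n)): iterate f n times starting from a. */
static int F(int (*f)(int), int a, unsigned int n) {
    int value = a;
    for (unsigned int i = 0; i < n; i++)
        value = f(value);
    return value;
}

static int double_it(int x) { return 2 * x; }   /* an example f : X -> X */

int main(void) {
    /* With a = 1 and f(x) = 2x, the unique F satisfies F(n) = 2^n. */
    printf("F(5) = %d\n", F(double_it, 1, 5));   /* 32 */
    return 0;
}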

12.7.1 Proof of uniqueness

Take two functions F : N → X and G : N → X such that:

F (0) = a

G(0) = a

F (n+ 1) = f(F (n))

G(n+ 1) = f(G(n))

where a is an element of X.

It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base Case: F (0) = a = G(0) so the equality holds for n = 0 .

Inductive Step: Suppose F(k) = G(k) for some k ∈ N. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1).

Hence F(k) = G(k) implies F(k+1) = G(k+1).

By induction, F (n) = G(n) for all n ∈ N .

12.7.2 Examples

Some common recurrence relations are:

• Golden Ratio: ϕ = 1 + (1/ϕ) = 1 + (1/(1 + (1/(1 + 1/...))))

• Factorial: n! = n(n− 1)! = n(n− 1) · · · 1

• Fibonacci numbers: f(n) = f(n− 1) + f(n− 2)



• Catalan numbers: C0 = 1 , Cn+1 = (4n+ 2)Cn/(n+ 2)

• Computing compound interest

• The Tower of Hanoi

• Ackermann function

12.8 See also

• Corecursion

• Course-of-values recursion

• Digital infinity

• Fixed point combinator

• Infinite loop

• Infinitism

• Iterated function

• Mise en abyme

• Reentrant (subroutine)

• Self-reference

• Strange loop

• Tail recursion

• Tupper’s self-referential formula

• Turtles all the way down

12.9 Bibliography

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

• Johnsonbaugh, Richard (2004). Discrete Mathematics. Prentice Hall. ISBN 0-13-117686-2.

• Hofstadter, Douglas (1999). Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books. ISBN 0-465-02656-7.

• Shoenfield, Joseph R. (2000). Recursion Theory. A K Peters Ltd. ISBN 1-56881-149-7.

• Causey, Robert L. (2001). Logic, Sets, and Recursion. Jones & Bartlett. ISBN 0-7637-1695-2.

• Cori, Rene; Lascar, Daniel; Pelletier, Donald H. (2001). Recursion Theory, Gödel’s Theorems, Set Theory, Model Theory. Oxford University Press. ISBN 0-19-850050-5.

• Barwise, Jon; Moss, Lawrence S. (1996). Vicious Circles. Stanford Univ Center for the Study of Language and Information. ISBN 0-19-850050-5. Offers a treatment of corecursion.

• Rosen, Kenneth H. (2002). Discrete Mathematics and Its Applications. McGraw-Hill College. ISBN 0-07-293033-0.

• Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). Introduction to Algorithms. MIT Press. ISBN 0-262-03293-7.



• Kernighan, B.; Ritchie, D. (1988). The C programming Language. Prentice Hall. ISBN 0-13-110362-8.

• Stokey, Nancy; Robert Lucas; Edward Prescott (1989). Recursive Methods in Economic Dynamics. Harvard University Press. ISBN 0-674-75096-9.

• Hungerford (1980). Algebra. Springer. ISBN 978-0-387-90518-1., first chapter on set theory.

12.10 References

[1] Pinker, Steven (1994). The Language Instinct. William Morrow.

[2] Pinker, Steven; Jackendoff, Ray (2005). “The faculty of language: What’s so special about it?". Cognition 95 (2): 201–236. doi:10.1016/j.cognition.2004.08.004. PMID 15694646.

[3] Nevins, Andrew; Pesetsky, David; Rodrigues, Cilene (2009). “Evidence and argumentation: A reply to Everett (2009)" (PDF). Language 85 (3): 671–681. doi:10.1353/lan.0.0140.

[4] Drucker, Thomas (4 January 2008). Perspectives on the History of Mathematical Logic. Springer Science & BusinessMedia. p. 110. ISBN 978-0-8176-4768-1.

[5] Barbara Partee and Mats Rooth. 1983. In Rainer Bäuerle et al., Meaning, Use, and Interpretation of Language. Reprintedin Paul Portner and Barbara Partee, eds. 2002. Formal Semantics: The Essential Readings. Blackwell.

[6] Hunter, David (2011). Essentials of Discrete Mathematics. Jones and Bartlett. p. 494.

12.11 External links

• Recursion – tutorial by Alan Gauld

• A Primer on Recursion – contains pointers to recursion in Formal Languages, Linguistics, Math and Computer Science

• Zip Files All The Way Down

• Nevins, Andrew and David Pesetsky and Cilene Rodrigues. Evidence and Argumentation: A Reply to Everett (2009). Language 85.3: 671–681 (2009)

A visual form of recursion known as the Droste effect. The woman in this image holds an object that contains a smaller image of her holding an identical object, which in turn contains a smaller image of herself holding an identical object, and so forth. Advertisement for Droste cocoa, c. 1900

Ouroboros, an ancient symbol depicting a serpent or dragon eating its own tail.

Recently refreshed sourdough, bubbling through fermentation: the recipe calls for some sourdough left over from the last time the same recipe was made.

The Sierpinski triangle—a confined recursion of triangles that form a fractal

Chapter 13

Recursive acronym

A recursive acronym is an acronym that refers to itself in the expression for which it stands. The term was first used in print in 1979 in Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, in which Hofstadter invents the acronym GOD, meaning “GOD Over Djinn”, to help explain infinite series, and describes it as a recursive acronym.[1] Other references followed.[2] However, the concept was used as early as 1968 in John Brunner's science fiction novel Stand on Zanzibar. In the story, the acronym EPT (Education for Particular Task) later morphed into “Eptification for Particular Task”.

13.1 Computer-related examples

In computing, an early tradition in the hacker community (especially at MIT) was to choose acronyms and abbreviations that referred humorously to themselves or to other abbreviations. Perhaps the earliest example in this context, from about 1977 or 1978, is TINT (“TINT Is Not TECO”), an editor for MagicSix written (and named) by Ted Anderson. This inspired the two MIT Lisp Machine editors called EINE (“EINE Is Not Emacs”) and ZWEI (“ZWEI Was EINE Initially”). These were followed by Richard Stallman's GNU (GNU’s Not Unix). Many others also include negatives, such as denials that the thing defined is or resembles something else (which the thing defined does in fact resemble or is even derived from), to indicate that, despite the similarities, it was distinct from the program on which it was based.[3]

An earlier example appears in a 1976 textbook on data structures, in which the pseudo-language SPARKS is used to define the algorithms discussed in the text. “SPARKS” is claimed to be a non-acronymic name, but “several cute ideas have been suggested” as expansions of the name. One of the suggestions is “Smart Programmers Are Required to Know SPARKS”.[4]

13.1.1 Notable examples

• Ace — Ace Code Editor

• Allegro — Allegro Low LEvel Game ROutines (early versions for Atari ST were called “Atari Low LEvel Game ROutines”)

• ANX — ANX is Not XNA

• AROS — AROS Research Operating System (originally Amiga Research Operating System)

• BAMF — BAMF Application Matching Framework

• BOSH — Bosh Outer SHell

• CAVE — CAVE Automatic Virtual Environment

• cURL — Curl URL Request Library[5]

• EINE — EINE Is Not Emacs

• FARM — Farm Animal Rights Movement

• FIJI — FIJI Is Just ImageJ

• FYBMEM — FYBMEM Your Basic Monitor Editor Mechanism

• GiNaC — GiNaC is Not a CAS (Computer Algebra System)

• GNU — GNU’s Not Unix

• GPE – GPE Palmtop Environment

• gRPC – grpc Remote Procedure Calls

• HIME — HIME Input Method Editor[6]

• INX — INX’s Not X (a UNIX clone)

• JACK — JACK Audio Connection Kit

• KGS — KGS Go Server

• LAME — LAME Ain't an MP3 Encoder[7]

• LISA – LISA: Invented Stupid Acronym[8]

• LiVES — LiVES is a Video Editing System

• MEGA — MEGA Encrypted Global Access[9]

• MINT — MINT Is Not TRAC

• MiNT — MiNT is Not TOS (later changed to “MiNT is Now TOS”)

• Mung — Mung Until No Good[10]

• Nano — Nano’s ANOther editor

• Nagios — Nagios Ain't Gonna Insist On Sainthood (a reference to the previous name of Nagios, “Netsaint”; agios [αγιος] is the Greek word for “saint”)

• NiL — NiL Isn't Liero

• Ninja-ide – Ninja-IDE Is Not Just Another IDE

• NITE — NITE Isn't TECO Either (the 2nd offering from the creator of TINT)

• pacc – pacc: a compiler-compiler[11]

• PHP — PHP: Hypertext Preprocessor (originally “Personal Home Page Tools”[12])

• PINE — PINE Is Nearly Elm, originally; PINE now officially stands for “Pine Internet News and E-mail”[13]

• PIP — PIP Installs Packages

• P.I.P.S. — P.I.P.S. Is POSIX on Symbian

• Qins — Qins is not Slow[14]

• RPM — RPM Package Manager (originally “Red Hat Package Manager”)

• SPARQL — SPARQL Protocol And RDF Query Language

• TikZ – TikZ ist kein Zeichenprogramm (German; TikZ is no drawing program)

• TIARA — TIARA is a recursive acronym[15]

• TiLP — TiLP is a Linking Program

• TIP — TIP isn't Pico

• TRESOR – TRESOR Runs Encryption Securely Outside RAM

• UIRA — UIRA Isn't a Recursive Acronym

• WINE — WINE Is Not an Emulator[16] (initially Windows Emulator)

• XBMC — XBMC Media Center (originally Xbox Media Center)

• XINU — Xinu Is Not Unix

• XNA — XNA’s Not Acronymed

• YAML — YAML Ain't Markup Language (initially “Yet Another Markup Language”)

• Zinf — Zinf Is Not Freeamp

• ZWEI — ZWEI Was EINE Initially (“eins” and “zwei” are German for “one” and “two” respectively)

Mutually recursive or otherwise special

• The GNU Hurd project is named with a mutually recursive acronym: “Hurd” stands for “Hird of Unix-Replacing Daemons”, and “Hird” stands for “Hurd of Interfaces Representing Depth.”

• RPM, PHP, XBMC and YAML were originally conventional acronyms which were later redefined recursively. They are examples of, or may be referred to as, backronymization, where the official meaning of an acronym is changed.

• Jini claims the distinction of being the first recursive anti-acronym: “Jini Is Not Initials”.[17][18] It might, however, be more properly termed an anti-backronym because the term “Jini” never stood for anything in the first place. The more recent “XNA”, on the other hand, was deliberately designed that way.

• Most recursive acronyms are recursive on the first letter, which is therefore an arbitrary choice, often selected for reasons of humour, ease of pronunciation, or consistency with an earlier acronym that used the same letters for different words, such as PHP: PHP Hypertext Preprocessor, which was originally “Personal home page”. However, YOPY, “Your own personal YOPY”, is recursive on the last letter (hence the last letter of the acronym had to be the same as the first).

13.2 Organizations

Some organizations have been named or renamed in this way:

• BWIA — BWIA West Indies Airways (formerly British West Indian Airways)

• FALE — Fale Association of Locksport Enthusiasts[19][20]

• GES — GES Exposition Services (formerly Greyhound Exposition Services)

• hEART — hEART the European Association for Research in Transportation

• LINK — Link Interchange NetworK, the UK ATM switching organization.

• Heil — Heil Environmental Industries Limited, maker of garbage trucks

13.3 See also

• Acronyms

• Anti-acronym

• Backronyms

• RAS syndrome (Redundant Acronym Syndrome syndrome)

• Self-reference

13.4 References

Notes

[1] “Puzzles and Paradoxes: Infinity in Finite Terms”. Retrieved 2013-04-23.

[2] “WordSpy – Recursive Acronym”. Retrieved 2008-12-18.

[3] The Free Software Movement and the Future of Freedom: The name “GNU”, Richard Stallman, March 9th 2006

[4] Fundamentals Of Data Structures (Ellis Horowitz & Sartaj Sahni, Computer Science Press, 1976)

[5] Stenberg, Daniel (20 March 2015). “curl, 17 years old today”. daniel.haxx.se. Retrieved 20 March 2015.

[6] “HIME Input Method Editor”. Retrieved 2012-06-15.

[7] “LAME Ain't an MP3 Encoder”. Retrieved 2006-11-15.

[8] Isaacson, Walter. Steve Jobs. New York: Simon and Schuster, 2011. Apple’s Lisa computer was actually named after Steve Jobs' daughter.

[9] “MEGA”. Retrieved 19 January 2013.

[10] “The Jargon File: Mung”. Retrieved 2007-10-15.

[11] “pacc: a compiler-compiler”. Retrieved 2012-05-14.

[12] “History of PHP”. php.net.

[13] “What Pine Really Stands For”. Retrieved 2007-03-06.

[14] QINS website

[15] .EXE magazine, November 1996

[16] “FAQ – The Official Wine Wiki”. Retrieved 2009-01-16.

[17] FAQ for JINI-USERS Mailing List, Retrieved 18 November 2013

[18] Introduction to The Jini Specification, Arnold et al, Pearson, 1999, ISBN 0201616343

[19] “FALE Association of Locksport Enthusiasts”. Retrieved 2014-02-12.

[20] FALE Association of Locksport Enthusiasts. Retrieved 2014-02-12.

Sources

• This article is based in part on the Jargon File, which is in the public domain.

13.5 External links

• The dictionary definition of recursive acronym at Wiktionary

Chapter 14

Self-reference

Self-reference occurs in natural or formal languages when a sentence, idea or formula refers to itself. The reference may be expressed either directly—through some intermediate sentence or formula—or by means of some encoding. In philosophy, it also refers to the ability of a subject to speak of or refer to himself, herself, or itself: to have the kind of thought expressed by the first person nominative singular pronoun, the word “I” in English.

Self-reference is studied and has applications in mathematics, philosophy, computer programming, and linguistics. Self-referential statements are sometimes paradoxical.

14.1 Usage

An example of a self-referential situation is self-creation, in which a logical organization produces the physical structure that in turn creates that organization.

Self-reference also occurs in literature and film when an author refers to his or her own work in the context of the work itself. Famous examples include Cervantes's Don Quixote, Shakespeare's A Midsummer Night’s Dream, Denis Diderot's Jacques le fataliste et son maître, Italo Calvino's If on a winter’s night a traveler, many stories by Nikolai Gogol, Lost in the Funhouse by John Barth, Luigi Pirandello's Six Characters in Search of an Author, Douglas Adams' The Hitchhiker’s Guide to the Galaxy series of books, and Federico Fellini's 8½. This is closely related to the concepts of breaking the fourth wall and meta-reference, which often involve self-reference.

The surrealist painter René Magritte is famous for his self-referential works. His painting The Treachery of Images includes the words “this is not a pipe” (in the original French, “Ceci n'est pas une pipe”), the truth of which depends entirely on whether the word “ceci” (in English, “this”) refers to the pipe depicted—or to the painting or the word or sentence itself.[2]

In computer science, self-reference occurs in reflection, where a program can read or modify its own instructions like any other data.[3] Numerous programming languages support reflection to some extent with varying degrees of expressiveness. Additionally, self-reference is seen in recursion (related to the mathematical recurrence relation), where a code structure refers back to itself during computation.[4]
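
Both ideas can be illustrated with a small Python sketch (added here as an example, not taken from the article; the function names are arbitrary). The first function uses the standard inspect module to read its own source code, a simple form of reflection; the second is an ordinary recursive function whose definition refers back to itself:

import inspect

def describe_self():
    # Reflection: the function treats its own source code as data.
    source = inspect.getsource(describe_self)
    return "describe_self is %d lines long" % len(source.splitlines())

def countdown(n):
    # Recursion: the definition calls itself on a smaller input.
    if n == 0:
        return ["liftoff"]
    return [str(n)] + countdown(n - 1)

print(describe_self())  # e.g. "describe_self is 4 lines long"
print(countdown(3))     # ['3', '2', '1', 'liftoff']

Note that inspect.getsource requires the function to be defined in a source file; reflection of this kind is what lets a program refer to its own text.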

14.2 Examples

14.2.1 In language

See also: Appendix:Autological words

A word that describes itself is called an autological word (or autonym). This generally applies to adjectives, for example sesquipedalian (i.e. “sesquipedalian” is a sesquipedalian word), but can also apply to other parts of speech, such as TLA, as a three-letter abbreviation for “three-letter abbreviation”, and PHP, which is a recursive acronym for “PHP: Hypertext Preprocessor”.[5]

A sentence which inventories its own letters and punctuation marks is called an autogram.
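
A first step toward checking such a sentence can be sketched in Python (an illustration added here, not from the article; a complete autogram checker would also have to parse the counts that the sentence claims for itself):

from collections import Counter

def letter_inventory(sentence):
    # Tally each letter, ignoring case, digits and punctuation.
    return Counter(ch for ch in sentence.lower() if ch.isalpha())

print(letter_inventory("this is not an autogram"))
# e.g. shows that 't' and 'a' each occur three times in that sentence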

The Ouroboros, a dragon that continually consumes itself, is used as a symbol for self-reference.[1]

• There is a special case of meta-sentence in which the content of the sentence in the metalanguage and the content of the sentence in the object language are the same. Such a sentence is referring to itself. However, some meta-sentences of this type can lead to paradoxes. “This is a sentence.” can be considered to be a self-referential meta-sentence which is obviously true. However, “This sentence is false” is a meta-sentence which leads to a self-referential paradox.

Hofstadter’s law, which specifies that “It always takes longer than you expect, even when you take into account Hofstadter’s Law”,[6] is an example of a self-referencing adage.

Fumblerules are a list of rules of good grammar and writing, demonstrated through sentences that violate those very rules, such as “Avoid cliches like the plague” and “Don't use no double negatives”. The term was coined in a published list of such rules by William Safire.[7][8]

14.2.2 In mathematics

• Gödel sentence

• Impredicativity

• Loop (graph theory)

• Tupper’s self-referential formula

14.2.3 In literature, film, and popular culture

Main article: Metafiction

• The subgenre of “recursive science fiction” is now so extensive that it has fostered a fan-maintained bibliography at the New England Science Fiction Association's website; some of it is about science fiction fandom, some about science fiction and its authors.[9]

14.3 See also

14.4 References

[1] Soto-Andrade, Jorge; Jaramillo, Sebastian; Gutierrez, Claudio; Letelier, Juan-Carlos. “Ouroboros avatars: A mathematical exploration of Self-reference and Metabolic Closure” (PDF). MIT Press. Retrieved 16 May 2015.

[2] Nöth, Winfried; Bishara, Nina (2007). Self-reference in the Media. Walter de Gruyter. p. 75. ISBN 978-3-11-019464-7.

[3] Malenfant, J.; Demers, F-N. “A Tutorial on Behavioral Reflection and its Implementation” (PDF). PARC. Retrieved 17 May 2015.

[4] Drucker, Thomas (4 January 2008). Perspectives on the History of Mathematical Logic. Springer Science & Business Media. p. 110. ISBN 978-0-8176-4768-1.

[5] PHP Manual: Preface, www.php.net

[6] Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. 20th anniversary ed., 1999, p. 152. ISBN 0-465-02656-7

[7] alt.usage.english.org’s Humorous Rules for Writing

[8] Safire, William (1979-11-04). “On Language; The Fumblerules of Grammar”. New York Times. p. SM4.

[9] “Recursive Science Fiction” New England Science Fiction Association website, last updated 3 August 2008

14.5 Sources

• Hofstadter, D. R. (1980). Gödel, Escher, Bach: an Eternal Golden Braid. New York, Vintage Books.

• Smullyan, Raymond (1994), Diagonalization and Self-Reference, Oxford Science Publications, ISBN 0-19-853450-7

Chapter 15

Self-referential encoding

Every day, people are presented with endless amounts of information, and in an effort to help keep track of and organize this information, people must be able to recognize, differentiate and store information. One way to do that is to organize information as it pertains to the self.[1] The overall concept of self-reference suggests that people interpret incoming information in relation to themselves, using their self-concept as a background for new information.[1] Examples include being able to attribute personality traits to oneself or to identify recollected episodes as being personal memories of the past.[2] The implications of self-referential processing are evident in many psychological phenomena. For example, the “cocktail party effect” notes that people attend to the sound of their names even during other conversation or more prominent, distracting noise. Also, people tend to evaluate things related to themselves more positively (this is thought to be an aspect of implicit self-esteem). For example, people tend to prefer their own initials over other letters.[3] The self-reference effect (SRE) has received the most attention through investigations into memory. The concepts of self-referential encoding and the SRE rely on the notion that relating information to the self during the process of encoding it in memory facilitates recall, hence the effect of self-reference on memory. In essence, researchers have investigated the potential mnemonic properties of self-reference.[4]

Research includes investigations into self-schema, self-concept and self-awareness as providing the foundation for self-reference's role in memory. Multiple explanations for the self-reference effect in memory exist, leading to a debate about the underlying processes involved in the self-reference effect. In addition, through the exploration of the self-reference effect, other psychological concepts have been discovered or supported, including simulation theory and the group reference effect. After researchers developed a concrete understanding of the self-reference effect, many expanded their investigations to consider the self-reference effect in particular groups like those with autism spectrum disorders or those experiencing depression.

15.1 Self-concept and self-schema

Self-knowledge can be categorized by structures in memory or schemata. A self-schema is a set of facts or beliefs that one has about themselves.[5] For any given trait, an individual may or may not be “schematic”; that is, the individual may or may not think about themselves as to where they stand on that trait. For example, people who think of themselves as very overweight or who identify themselves to a greater extent based on their body weight would be considered “schematic” on the attribute of body weight. Thus, many everyday events, such as going out for a meal or discussing a friend's eating habits, could induce thoughts about the self.[6] When people relate information to something that has to do with the self, it facilitates memory.[7] Self-descriptive adjectives that fit into one's self-schema are easier to remember than adjectives not viewed as related to the self. Thus, the self-schema is an aspect of oneself that is used as an encoding structure that brings about memory of information consistent with one's self-schema.[8] Memories that are elaborate and well encoded are usually the result of self-referent correlations during the process of remembering. During the process of encoding, trait representations are encoded in long-term memory either directly or indirectly. When they are directly encoded, it is in terms of relating to the self; when indirectly encoded, it is done through spouts of episodic information instead of information about the self.[5]

Self-schema is often used as somewhat of a database for encoding personal data.[9] The self-schema is also used by paying selective attention to outside information and internalizing that information more deeply in one's memory depending on how much that information relates to their schema.[10] When self-schema is engaged, traits that go along with one's view of themselves are better remembered and recalled. These traits are also often recalled much better when processed with respect to the self. Similarly, items that are encoded with the self are based on one's self-schema.[4] Processing the information should balance out when recalled for individuals who have a self-schema that goes along with the information.[8]

Self-schemas do not necessarily only involve individual traits. People self-categorize at different levels that range from more personal to more social. Self-schemas have three main categories which play a role: the personal self, the relational self, and the collective self. The personal self deals with individual level characteristics, the relational self deals with intimate relationship partners, and the collective self deals with group identities, relating to self-important social groups to which one belongs (e.g., one's family or university).[11] Information that is related to any type of self-schema, including group-related knowledge structures, facilitates memory.

In order for the self to be an effective encoding mechanism, it must be a uniform, consistent, well-developed schema. It has been shown that identity exploration leads to the development of self-knowledge which facilitates self-judgments. Identity exploration led to shorter decision times, higher confidence ratings and more intrusions in memory tasks.[12] Previous researchers hypothesized that words compatible with a person's self-schema are easily accessible in memory and are more likely than incompatible words to intrude on a schema-irrelevant memory task. In one experiment, when participants were asked to decide if certain adjectives were “like me” or “not like me,” they made the decisions faster when the words were compatible with their self-schema.[13]

However, despite the existence of the self-reference effect when considering schemata-consistent adjectives, the connection between the self and memory can lead to a larger number of mistakes in recognition, commonly referred to as false alarms. Rogers et al. (1979) found that people are more likely to falsely recognize adjectives they had previously designated to be self-descriptive.[12] Expanding on this, Strube et al. (1986) found that false alarms occurred more for self-schema consistent content, presumably because the presence of such words in the schema makes them more accessible in memory.[13]

In addition to investigating the self-reference effect in regards to schemata-consistent information, Strube et al. discussed how counter-schemata information relates to this framework. They noted that the pattern of making correct decisions more rapidly did not hold when considering words that countered a person's self-schema, presumably because they were difficult to integrate into memory due to lack of a preexisting structure.[13] That is, they lacked the organizational structure of encoding because they did not fall into the “like me” category, and elaboration would not work because prior connections to the adjective did not exist.

15.1.1 Self-awareness and personality

Two of the most common functions of the self receiving significant attention in research are the self acting to organize the individual's understanding of the social environment, and the self functioning to regulate behavior through self-evaluation.[14] The concept of self-awareness is considered to be the foundational principle for both functions of the self. Some research presents self-awareness in terms of self-focused attention,[15] whereas Hull and Levy suggest that self-awareness refers to the encoding of information based on its relevance to the self.[14] Based on the latter interpretation of self-awareness, individuals must identify the aspects of situations that are relevant to themselves, and their behavior will be shaped accordingly.[14] Hull and Levy suggest that self-awareness corresponds to the encoding of information cued by self-symbolic stimuli, and examine the idea of self-awareness as a method of encoding. They structured an investigation that examined self-referent encoding in individuals with different levels of self-awareness, predicting that individuals with higher levels of self-consciousness would encode self-relevant information more deeply than other information, and that they would encode it more deeply than individuals with low levels of self-consciousness.[14] The results of their investigation supported their hypothesis that self-focused attention is not enough to explain the role of self-awareness on attribution. Their results suggest that self-awareness leads to increased sensitivity to the situationally defined meanings of behavior, and therefore organizes the individual's understanding of the social environment.[14] The research presented by Hull and Levy led to future research on the encoding of information associated with self-awareness.

In later research, Hull and colleagues examined the associations between self-referential encoding, self-consciousness and the extent to which a stimulus is consistent with self-knowledge. They first assumed that the encoding of a stimulus is facilitated if an individual's working memory already contains information consistent with the stimulus, and suggested that self-consciousness as an encoding mechanism relies on an individual's self-knowledge.[16] It is known that situational and dispositional factors may activate certain pools of knowledge, moving them into working memory, and guiding the processing of certain stimulus information.[16]

In order to better understand the idea of activating information in memory, Hull et al. presented an example of how information is activated. They referred to the sentence “The robber took the money from the bank”.[17] In English, the word bank has two applicable meanings in the context of this sentence (monetary institution and river shore). However, the monetary institution meaning of the word is more highly activated in this context due to the addition of the words robber and money to the sentence, because they are associatively relevant and therefore pull the monetary institution definition for bank into working memory. Once information is added to working memory, meanings and associations are more easily drawn. Therefore, the meaning of this example sentence is almost universally understood.[16]

In reference to self-consciousness and self-reference, the connection between self-consciousness and self-referent encoding relies on such information activation. Research suggests that self-consciousness activates knowledge relating to the self, thereby guiding the processing of self-relevant information.[16] Three experiments conducted by Hull and colleagues provided evidence that a manipulation of accessible self-knowledge impacts self-referent encoding based on the self-relevance of such information, individual differences in the accessibility of self-knowledge (self-consciousness) impact perception, and a mediation relationship exists between self-consciousness and individual differences in self-referential encoding.[16]

Similar to how self-awareness impacts the availability of self-knowledge and the encoding of self-relevant information, through the development of the self-schema, people develop and maintain certain personality characteristics leading to a variety of behavior patterns. Research has been done on the differences between Type A and Type B behavior patterns, focusing on how people in each group respond to environmental information and their interpretation of the performance of others and themselves. It has been found that Type A behavior is characterized by competitive achievement striving, time urgency and hostility, whereas Type B is usually defined as an absence of Type A characteristics. When investigating causal attributions for hypothetical positive and negative outcomes, Strube et al. found that Type A individuals were more self-serving, in that they took greater responsibility for positive than negative effects. Strube and colleagues argued that this could be a result of the fact that schema-consistent information is more easily remembered, and the ease with which past successes and failures are recalled, determined by self-schema, would impact attributions. It is reasonable to believe that Type A's might recall successes more easily and hence be more self-serving.[13]

15.2 Theoretical background

Influential psychologists Craik and Lockhart laid the groundwork for research focused on self-referential encoding and memory. In 1972 they proposed their Depth of Processing framework, which suggests that memory retention depends on how the stimulus material was encoded in memory.[18][19] Their original research considered structural, phonemic, and semantic encoding tasks, and showed that semantic encoding is the best method to aid in recall. They asked participants to rate 40 descriptive adjectives on one of four tasks: Structural (Big font or small font?), Phonemic (Rhymes with xxx?), Semantic (Means same as xxx?), or Self-reference (Describes you?). This was then followed by an “incidental recall task”. This is where participants are asked, without prior warning, to recall as many of the words they had seen as possible within a given time limit. Craik and Tulving's original experiment showed that structural and phonemic tasks lead only to “shallow” encoding, while the semantic tasks lead to “deep” encoding and resulted in better recall.[20]

However, in 1977, it was shown that self-relevant or self-descriptive encoding leads to even better recall than semantic tasks.[19] Experts suggest that the call on associative memory required by semantic tasks is what provides the advantage over structural or phonemic tasks, but is not enough to surpass the benefit provided by self-referential encoding.[1] The fact that self-reference was shown to be a stronger memory encoding method than semantic tasks is what led to more significant interest in the field.[4] One early and significant experiment aimed to place self-reference on Craik and Lockhart's depth of processing hierarchy, and suggested that self-reference was a more beneficial encoding method than semantic tasks. In this experiment, participants filled out self-ratings on 84 adjectives. Months later, these participants were revisited and were randomly shown 42 of those words. They then had to select the group of 42 “revisited” words out of the total original list. The researchers argued that if the “self” was involved in memory retrieval, participants would incorrectly recognize words that were more self-descriptive.[1] In another experiment, subjects answered yes or no to cue questions about 40 adjectives in 4 tasks (structural, phonemic, semantic and self-referential) and later had to recall the adjectives. This experiment validated the strength of self-reference as an encoding method, and indicated it developed a stronger memory trace than the semantic task.[1]

Researchers are implementing a new strategy by developing different encoding tasks that enhance memory very similarly to self-referential encoding.[11] Symons (1990) had findings that went against the norm when he was unable to find evidence of self-schematicity in the self-reference effect.[11] Another finding was that when referencing gender and religion, there was a low memory recall when compared with referencing the self. A meta-analysis by Symons and Johnson (1997) showed self-reference resulting in better memory in comparison to tasks relying on semantic encoding or other-referent encoding. According to Symons and Johnson, self-referencing questions elicit elaboration and organization in memory, both of which create a deeper encoding and thus facilitate memory.[21]

Theorists that favor the view that the self has a special role believe that the self leads to more in-depth processing, leading to easier recall during self-reference tasks.[4] Theorists also promote the self-schema as being one of the sole inhibitors that allow for recall from deep memory.[9] Thorndyke and Hayes-Roth had the goal of focusing on the process made by the active memory schemata.[22] Sex-typed individuals recall trait adjectives that go along with their sex role more quickly than trait adjectives that are not. During the process of free recall, these individuals also showed more patterns for gender clustering than other sexually typed individuals.[8]

15.3 Types of self-referential encoding tasks

As research on self-referential encoding became more prolific, some psychologists took an opportunity to delineate specific self-referential encoding tasks. It is noted that descriptive tasks are those that require participants to determine if a stimulus word can be classified as “self-descriptive.” Autobiographical tasks are those that require participants to use the stimulus word as a cue to recall an autobiographical memory. Results from experiments that differentiated between these types of self-referential encoding found that they both produced better recall than semantic tasks, and neither was more advantageous than the other. However, research does suggest that the two types of self-referential encoding do rely on different processes to facilitate memory.[4] In most experiments discussed, these types of self-referential encoding were not differentiated.

In a typical self-reference task, adjectives are presented and classified as either self-descriptive or not.[4] For example, in a study by Dobson and Shaw, adjectives about the self that were preselected were given to the participants, and they decided whether or not the adjectives were self-descriptive.[10] The basis for making certain judgments, decisions and inferences is a self-referent encoding task. If two items are classified as self-descriptive, there is no reason one trait would not be equally as easy to retrieve as the other on a self-reference task.[4]

15.4 Explanations for the self-reference effect

While a significant amount of research supports the existence of the self-reference effect, the processes behind it are not well understood. However, multiple hypotheses have been introduced, and two main arguments have been developed: the elaborative processing hypothesis and the organizational processing hypothesis.[23] Encodings in reference to the self are so elaborate because of the information one has about the self.[11] Information encoded with the self is better remembered than information encoded with reference to something else.[24]

15.4.1 Elaboration

Elaboration refers to the encoding of a single word by forming connections between it and other material already stored in memory.[19] By creating these connections between the stimulus word and other material already in memory, multiple routes for retrieval of the stimulus word are formed.[23] Based on the depth of processing framework, memory retention increases as elaboration during encoding increases.[19] The Elaborative Processing Hypothesis would suggest that any encoding task that leads to the development of the most trace elaboration or associations is the best for memory retention. Additional research on the depth of processing hierarchy suggests that self-reference is the superior method of information encoding. The elaborative hypothesis would suggest this is because self-reference creates the most elaborate trace,[23] due to the many links that can be made between the stimulus and information about the self already in memory.[19]

15.4.2 Organization

The organizational processing hypothesis was proposed by Klein and Kihlstrom.[19] This hypothesis suggests that encoding is best prompted by considering stimulus words in relation to one another. This thought process and relational thinking creates word-to-word associations.[23] These inter-item associations are paths in memory that can be used during retrieval. Also, the category labels that define the relations between stimulus items can be used as item cues. Evidence of the organizational component of encoding is demonstrated through the clustering of words during recall.[23] Word clustering during recall indicates that relational information was used to store the words in memory. Rogers, Kuiper and Kirker showed that self-referential judgments were more likely to encourage organization than semantic ones.[1] Therefore, they suggested the self-reference effect was likely due to the organizational processing endured by self-referential encoding.[23]

Structural, phonemic and semantic tasks within the depth of processing paradigm require words to be considered individually, and lend themselves to an elaborative approach. As such, it can be argued that self-referential encoding is superior because it leads to an indirect division of words into categories: words that describe me versus words that do not.[19] Due to this connection between self-reference and organizational processing, further research has been done on this area. Klein and Kihlstrom's research suggests first that, like previous research, self-reference led to better recall than semantic and structural encoding. Second, they found that self-referentially encoded words were more clustered in recall than words from other tasks, suggesting higher levels of organizational processing. From this they concluded that the organization, not the encoding task, is what makes self-referential encoding superior.[19]

15.4.3 Dual process

Psychologists Einstein and Hunt showed that both elaborative processing and organizational processing facilitate recall. However, their research argues that the effectiveness of either approach depends on how related the stimulus words are to one another. A list of highly related stimulus words would be better encoded using the elaborative method. The relations between the words would be evident to subjects; therefore, they would not gain any additional pathways for retrieval by encoding the words based on their categorical membership. Instead, the other information gained through elaborative processing would be more beneficial. On the other hand, a list of stimulus words with little relation would be better stored to memory through the organizational method. Since the words have no obvious connection to one another, subjects would likely encode them individually, using an elaborative approach. Since relational information wouldn't be readily detected, focusing on it would add to memory by creating new traces for retrieval.[23][25] Superior recall was better explained by a combination of elaboration and organization.

Ultimately, the exact processes behind self-referential encoding that make it superior to other encoding tasks are still under debate. Research suggests that if elaborative processing is behind self-referential encoding, a self-referential task should have the same effect as an elaborative task, whereas if organizational processing underlies the self-reference effect, self-referential encoding tasks should function like organizational tasks.[23] To test this, Klein and Loftus ran a 3 × 2 study testing organizational, elaborative and self-referential encoding with lists of 30 related or unrelated words. When participants were asked to memorize the unrelated list, recall and clustering were higher for the organizational task, which produced almost equal results to the self-referential task, suggesting that it has an organizational basis. For the list of related words, the elaborative task led to better recall and had matched results to the self-reference task, suggesting an elaborative basis. This research, then, suggests that the self-reference effect cannot be explained by a single type of processing.[23] Instead, self-referential encoding must lead to information in memory that incorporates item-specific and relational information.[23]

Overall, the SRE relies on the unique mnemonic aspects of the self. Ultimately, if the research is suggesting that the self has superior elaborative or organizational properties, information related to the self should be more easily remembered and recalled.[21] The research presented suggests that self-referential encoding is superior because it promotes organization and elaboration simultaneously, and provides self-relevant categories that promote recall.[21]

15.5 Social brain science

The field of social brain science is aimed at examining the neural foundations of social behavior.[3] Neuroimaging and neuropsychology have led to the examination of neuroanatomy and its connection to psychological topics.[26] Through this research, neuropsychologists have found a connection between social cognitive functioning and the medial prefrontal cortex (mPFC). In addition, the mPFC has been connected to reflection and introspection about personal mental states.[26] Supporting these findings, it has been shown that damage to the mPFC is connected to impairments with self-reflection, introspection and daydreaming, as well as social competence, but not other areas of functioning.[3] As such, the mPFC has been connected to self-referential processing.[2]

The research discussed by those focusing on the neuroanatomy of self-referential processing included similar tasks to the memory and depth of processing research discussed previously. When participants were asked to judge adjectives based on whether or not they were self-descriptive, it was noted that the more self-relevant the trait, the stronger the activation of the mPFC. In addition, it was shown that the mPFC was activated during the appraisal of one's own personality traits, as well as during trait retrieval.[2] One study showed that the more activity in the mPFC during self-referential judgments, the more likely the word was to be remembered on a subsequent surprise memory test. These results suggest that the mPFC is involved in both self-referential processing and in creating self-relevant memories.[3]

Medial prefrontal cortex (mPFC) activation occurs during processing of self-relevant information.[27] When a self-referent judgment is more relatable and less negative, the mPFC is activated. Findings support clear-cut circuits that show high levels of activation when cognitive and emotional aspects of self-reflection are present.[27] The caudate nucleus has not been associated with self-reference before; however, Fossati and colleagues found activity while participants were retrieving self-relevant trait adjectives.[28][27] The ventral anterior cingulate cortex (vACC) is also a part of the brain that becomes activated when there are signs of self-referencing and processing. The vACC is activated when self-descriptive information is negative.[27] There is also pCC (posterior cingulate cortex) activity seen in neuroimaging studies during self-referential processing.[27]

15.5.1 Depth of processing or cognitive structure

Given all of the neurological support for the effect of self-reference on encoding and memory, there is still a debate in the psychological community about whether or not the self-reference effect signifies a special functional role played by the self in cognition. Generally, this question is met by people that have two opposing views on the processes behind self-reference. On one side of the debate, people believe that the self has special mnemonic abilities because it is a unique cognitive structure. On the other side, people support the arguments described above that suggest there is no special structure, but instead, the self-reference effect is simply a part of the standard depth of processing hierarchy. Since the overall hypothesis is the same for both sides of the debate, that self-relevant material leads to enhanced memory, it is difficult to test them using strictly behavioral measures. Therefore, PET and fMRI scans have been used to see the neural marker of self-referential mental activity.[3]

Previous studies have shown that areas of the left prefrontal cortex are activated during semantic encoding. Therefore, if the self-reference effect works the same way, as part of the depth of processing hierarchy, the same brain region should be activated when judging traits related to the self. However, if the self has unique mnemonic properties, then self-referential tasks should activate brain regions distinct from those activated during semantic tasks.[3] The field is still in its infancy, but future work on this hypothesis might help to settle the debate about the underlying processes of self-referential encoding.

15.5.2 Simulation theory

While not able to completely settle the debate over the foundation of self-referential processing, studies on the neurological aspect of personality trait judgments did lead to a related, significant result. It has been shown that judging personality traits about oneself and a close friend activated overlapping brain regions, and the activated regions have all been implicated in self-reference. Noting the similarity between making self-judgments and judgments about close others led to the introduction of the simulation theory of empathy. Simulation theory rests on the idea that one can make inferences about others by using the knowledge they have about themselves.[2] In essence, the theory suggests that people use self-reflection to understand or predict the mental state of others.[26] The more similar a person perceives another to be, the more active the mPFC has shown to be, suggesting more deep or intricate self-reference.[2] However, this effect can cause people to make inaccurate judgments about others or to believe that their own opinions are representative of others in general. This misrepresentation is referred to as the false-consensus effect.[26]

15.6 Expansion of the SRE: group reference

In addition to simulation theory, other expansions of the self-reference effect have been examined. Through studying the self, researchers have found that the self consists of many independent cognitive representations. For example, the personal self composed of individual characteristics is separate from the relational self, which is based on relationships with significant others. These two forms of self are again separate from the collective self, which corresponds to a particular group identity.[29] Noting the existence of the collective self and the different group identities that combine to form such a self-representation led researchers to question if information stored in reference to a social group identity has the same effects in memory as information stored in reference to the individual self. In essence, researchers questioned if the self-reference effect can be extended to include situations where the self is more socially defined, producing a group-reference effect.[11]

Previous research supports the idea that the group-reference effect should exist from a theoretical standpoint. First, the self-expansion model argues that individuals incorporate characteristics of their significant others (or other in-group members[30]) into the development of their self-concept.[31] From this model, it is reasonable to conclude that characteristics that are common to both oneself and their significant others (or in-group members) would be more accessible.[11] Second, the previous research discussed suggests that the self-reference effect is due to some combination of organizational, elaborative, mental cueing or evaluative properties of self-referential encoding tasks. Given that we have significant stores of knowledge about our social identities, and such collective identities provide an organizational framework, it is reasonable to assume that a group-reference task would operate similar to that of a self-reference task.[11]

In order to test these claims, Johnson and colleagues aimed to test whether the self-reference effect generalized to group-level identities. Their first study was structured to simply assess if group-reference influenced subsequent memory. In their experiment, they used membership at a particular university as the group of reference. They included group-reference, self-reference and semantic tasks. The experiment replicated the self-reference effect, consistent with previous research. In addition, evidence for a group-reference effect was found. Group-referenced encoding produced better recall than the semantic tasks, and the level of recall from the group-referenced task was not significantly different from the self-referenced task.[11]

Despite finding evidence of a group-reference effect, Johnson and colleagues pointed out that people identify with numerous groups, each with unique characteristics. Therefore, in order to reach conclusive evidence of a group-reference effect, alternative group targets need to be considered. In a second experiment by Johnson et al., the group of reference was modified to be the family of the individual. This group has fewer exemplars than the pool of university students, and affective considerations of the family as a group should be strong. No specific instructions or definitions were provided for family, allowing individuals to consider either the group as a whole (prototype) or specific exemplars (group). When the experiment was repeated using family as the group of reference, group-reference produced recall as much as self-reference; mean recall for the group-reference task was in fact higher than for self-reference. Participants indicated that they considered both the prototype and individual exemplars when responding to the questions, suggesting that the magnitude of the group-reference effect might not be dependent on the number of exemplars in the target group.[11]

Both experiments presented by Johnson et al. found evidence for the group-reference effect. However, these conclusions are limited to the target groups of university students and family. Other research included gender (males and females) and religion (Jewish) as the reference groups, and the group-reference effect on memory was not as evident. The group-reference recall for these two groups was not significantly more advantageous than the semantic task. Questioning what characteristics of reference groups lead to the group-reference effect, a meta-analysis of all four group-reference conditions was performed. This analysis found that self-reference emerged as the most powerful encoding device; however, evidence was found to support the existence of a group-reference effect. The size of the reference groups and the number of specific, individual exemplars were hypothesized to influence the existence of the group-reference effect. In addition, accessibility and level of knowledge about group members may also impact such an effect. So, while university students is a much larger group than family, individual exemplars may be more readily accessible than those in a religious group. Similarly, different cognitive representations were hypothesized to influence the group-reference effect. When a larger group is considered, people may be more likely to consider a prototype, which may lead to fewer elaborations and cues later on. Smaller groups may lead to relying on the prototype and specific exemplars.[11] Finally, desirability judgments that influence later processing may be influenced by self-reference and certain group-reference tasks.[32] Individuals may be more sensitive to evaluative implications for the personal self and some group identities, but not others.[33]

Groups are also a major part of the self; therefore, the roles that different groups play in our self-concept also play a role in the self-reference effect.[11] We process information about group members similarly to how we process information about ourselves.[11] Recall of remarks referencing our home, our self and our group is tied to the familiarity of those aspects of our self.[11] References to the self and to a social group, and the identity that comes along with being a part of a social group, are equally effective for memory.[11] This is especially true when the groups are small, rather than large.[11]

Ultimately, the group-reference effect provides evidence to explain the tendency to notice, pay attention to and remember statements made in regard to our home when traveling in a foreign place.[11] Considering the proposal that groups form part of the self, this phenomenon can be considered an extension of the self-reference effect. Similar to the memorable nature of references to a person's individual self, references to social identities seem to be privileged in memory as well.[11]

15.7 Applications

Once the foundation of research on self-referential encoding was established, psychologists began to explore how the concept applied to different groups of people, and connected to different phenomena.

15.7.1 Autism spectrum disorder

Individuals diagnosed with autism spectrum disorders (ASDs) can display a wide range of symptoms. Some of the most common characteristics of individuals with ASDs include impairments with social functioning, language and communication difficulties, repetitive behaviors and restricted interests. In addition, it is often noted that these individuals are more “self-focused.” That is, they have difficulty seeing things from another's perspective.[34] Despite being self-focused, though, research has shown that individuals with ASDs often have difficulty identifying or describing their emotions or the emotions of others. When asked to describe their daily experiences, responses from individuals on the autism spectrum tended to focus more on physical descriptions rather than mental and emotional states. In regards to their social interactions and behavior differences, it is thought that these individuals lack top-down control, and therefore, their bottom-up decisions remain unchecked. This simply suggests that these individuals cannot use their prior knowledge and memory to make sense of new input, but instead react to each new input individually, compiling them to make a whole picture.[34]

Noting the difficulty individuals with ASDs experience with self-awareness, it was thought that they might have difficulty with self-related memory processes.[35] Psychologists questioned if these individuals would show the typical self-reference effect in memory.[34] In one Depth of Processing study, participants were asked questions about the descriptiveness of certain stimulus words. However, unlike previous DOP studies that focused on phonemic, structural, semantic and self-referential tasks, the tasks were altered for this experiment. To test the referential abilities of individuals with ASDs, the encoding tasks were divided into: “the self,” asking to what extent a stimulus word described oneself, “similar close other,” asking to what extent a stimulus word was descriptive of one's best friend, “dissimilar non-close other,” asking to what extent a stimulus word was descriptive of Harry Potter, and a control group that was asked to determine the number of syllables in each word. Following these encoding tasks, participants were given thirty minutes before a surprise memory task. It was found that individuals with ASDs had no impairment in memory for words encoded in the syllable or dissimilar non-close other condition. However, they had decreased memory for words related to the self.[34]

Therefore, while research suggests that self-referentially encoded information is encoded more deeply than other information, the research on individuals with ASDs showed no advantage for memory recognition with self-reference tasks over semantic encoding tasks. This suggests that individuals with ASDs don't preferentially encode self-relevant information. Psychologists have investigated the biological basis for the decreased self-reference effect among individuals with autism spectrum disorders and have suggested that it may be due to less specialized neural activity in the mPFC for those individuals.[35] However, while individuals with ASDs showed smaller self-reference effects than the control group, some evidence of a self-reference effect was evident in some cases. This indicates that self-referent impairments are a matter of degree, not total absence.[34]

Lombardo and his colleagues measured empathy among individuals with ASDs, and showed that these individuals scored lower than the control group on all empathy measures.[34] This may be a result of the difficulty for these individuals to understand or take the perspective of others, in conjunction with their difficulty identifying emotions. This has implications for simulation theory, because these individuals are unable to use their self-knowledge to make conclusions about similar others.

Ultimately, the research suggests that people with ASDs might benefit from being more self-focused. The better their ability to reflect on themselves, the better they can mentalize with others.[34]

15.7.2 Depression

There are three possible relations between cognitive processes and anxiety and depression. The first is whether cognitive processes are actually caused by the onset of clinically diagnosed symptoms of major depression or just generalized sadness or anxiousness. The second is whether emotional disorders such as depression and anxiety are able to be considered as caused by cognitions. And the third is whether different specific cognitive processes are able to be considered associates of different disorders.[36] Kovacs and Beck (1977) posited a schematic model of depression where an already depressed self was primed by outside prompts that negatively impacted cognitive illusions of the world in the eye of oneself. These prompts only led participants to a more depressive series of emotions and behavior.[37] The results from the study done by Derry and Kuiper supported Beck's theory that a negative self-schema is present in people, especially those with depressive disorder.[9] Depressed individuals attribute depressive adjectives to themselves more than nondepressive adjectives.[10] Those suffering from a more mild case of depression have trouble deciphering between the traits of themselves and others, which results in a loss of their self-esteem and their negative self-evaluation. A depressive schema is what causes the negativity reported by those suffering from depression.[9] Kuiper and Derry found that self-referent recall enhancement was limited only to nondepressed content.[9]

Generally, self-focus is associated with negative emotions; in particular, private self-focus is more strongly associated with depression than public self-focus.[38] Results from brain-imaging studies show that during self-referential processing, those with major depressive disorder show greater activation in the medial prefrontal cortex, suggesting that depressed individuals may be exhibiting greater cognitive control than non-depressed individuals when processing self-relevant information.[39]

15.8 References

[1] Rogers, T. B., Kuiper, N. A., & Kirker, W. S. (1977). Self-reference and the encoding of personal information. Journal of Personality and Social Psychology, 35(9), 677-688. doi:10.1037/0022-3514.35.9.677

[2] Benoit, R. G., Gilbert, S. J., Volle, E., & Burgess, P. W. (2010). When I think about me and simulate you: Medial rostral prefrontal cortex and self-referential processes. Neuroimage, 50(3), 1340-1349. doi:10.1016/j.neuroimage.2009.12.091

[3] Heatherton, T. F., Macrae, C., & Kelley, W. M. (2004). What the social brain sciences can tell us about the self. Current Directions in Psychological Science, 13(5), 190-193. doi:10.1111/j.0963-7214.2004.00305.x

[4] Klein, S. B., Loftus, J., & Burton, H. A. (1989). Two self-reference effects: The importance of distinguishing between self-descriptiveness judgments and autobiographical retrieval in self-referent encoding. Journal of Personality and Social Psychology, 56(6), 853-865. doi:10.1037/0022-3514.56.6.853

[5] Katz, A. N. (1987). Self-reference in the encoding of creative-relevant traits. Journal of Personality, 55(1), 97-120. doi:10.1111/j.1467-6494.1987.tb00430.x

[6] Markus, H., Hamill, R., & Sentis, K. P. (1987). Thinking fat: Self-schemas for body weight and the processing of weight-relevant information. Journal of Applied Social Psychology, 17(1), 50-71. doi:10.1111/j.1559-1816.1987.tb00292.x

[7] Klein, S. B., Loftus, J., & Burton, H. A. (1989). Two self-reference effects: The importance of distinguishing between self-descriptiveness judgments and autobiographical retrieval in self-referent encoding. Journal of Personality and Social Psychology, 56(6), 853-865. doi:10.1037/0022-3514.56.6.853

[8] Mills, C. J. (1983). Sex-typing and self-schemata effects on memory and response latency. Journal of Personality and Social Psychology, 45(1), 163-172. doi:10.1037/0022-3514.45.1.163

[9] Kuiper, N. A., & Derry, P. A. (1982). Depressed and nondepressed content self-reference in mild depressives. Journal of Personality, 50(1), 67-80. doi:10.1111/j.1467-6494.1982.tb00746.x

[10] Dobson, K. S., & Shaw, B. F. (1987). Specificity and stability of self-referent encoding in clinical depression. Journal of Abnormal Psychology, 96(1), 34-40. doi:10.1037/0021-843X.96.1.34

[11] Johnson, C., Gadon, O., Carlson, D., Southwick, S., Faith, M., & Chalfin, J. (2002). Self-reference and group membership: Evidence for a group-reference effect. European Journal of Social Psychology, 32(2), 261-274. doi:10.1002/ejsp.83

[12] Dunkel, C. S. (2005). Ego-identity and the Processing of Self-relevant Information. Self and Identity, 349-359.

[13] Strube, M., Berry, J. M., Lott, C., Fogelman, R., Steinhart, G., Moergen, S., & Davison, L. (1986). Self-schematic representation of the Type A and B behavior patterns. Journal of Personality and Social Psychology, 51(1), 170-180. doi:10.1037/0022-3514.51.1.170

[14] Hull, J. G., & Levy, A. S. (1979). The organizational functions of the self: An alternative to the Duval and Wicklund model of self-awareness. Journal of Personality and Social Psychology, 37(5), 756-768. doi:10.1037/0022-3514.37.5.756

[15] Duval, S., & Wicklund, R. A. (1972). A theory of objective self-awareness. New York: Academic Press.

[16] Hull, J. G., Van Treuren, R. R., Ashford, S. J., Propsom, P., & Andrus, B. W. (1988). Self-consciousness and the processing of self-relevant information. Journal of Personality and Social Psychology, 54(3), 452-465. doi:10.1037/0022-3514.54.3.452


[17] Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

[18] Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671-684. doi:10.1016/S0022-5371(72)80001-X

[19] Klein, S. B., & Kihlstrom, J. F. (1986). Elaboration, organization, and the self-reference effect in memory. Journal of Experimental Psychology: General, 115(1), 26-38. doi:10.1037/0096-3445.115.1.26

[20] Craik, F., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 268-294.

[21] Symons, C. S., & Johnson, B. T. (1997). The self-reference effect in memory: A meta-analysis. Psychological Bulletin, 121(3), 371-394. doi:10.1037/0033-2909.121.3.371

[22] Thorndyke, P. W., & Hayes-Roth, B. (1979). The use of schemata in the acquisition and transfer of knowledge. Cognitive Psychology, 11(1), 82-106. doi:10.1016/0010-0285(79)90005-7

[23] Klein, S. B., & Loftus, J. (1988). The nature of self-referent encoding: The contributions of elaborative and organizational processes. Journal of Personality and Social Psychology, 55(1), 5-11. doi:10.1037/0022-3514.55.1.5

[24] Bennett, M., Allan, S., Anderson, J., & Asker, N. (2010). On the robustness of the group reference effect. European Journal of Social Psychology, 40(2), 349-354.

[25] Einstein, G. O., & Hunt, R. R. (1980). Levels of processing and organization: Additive effects of individual item and relational processing. Journal of Experimental Psychology: Human Learning and Memory, 6, 588-598.

[26] Mitchell, J. P., Banaji, M. R., & Macrae, C. (2005). The link between social cognition and self-referential thought in the medial prefrontal cortex. Journal of Cognitive Neuroscience, 17(8), 1306-1315. doi:10.1162/0898929055002418

[27] Moran, J. M., Macrae, C. N., Heatherton, T. F., Wyland, C. L., & Kelley, W. M. (2006). Neuroanatomical evidence for distinct cognitive and affective components of self. Journal of Cognitive Neuroscience, 18(9), 1586-1594. doi:10.1162/jocn.2006.18.9.1586

[28] Fossati, P., Hevenor, S. J., Lepage, M., Graham, S. J., Grady, C., Keightley, M. L., et al. (2004). Distributed self in episodic memory: Neural correlates of successful retrieval of self-encoded positive and negative personality traits. Neuroimage, 22(4), 1596-1604. doi:10.1016/j.neuroimage.2004.03.034

[29] Brewer, M. B., & Gardner, W. (1996). Who is this “We”? Levels of collective identity and self representations. Journal of Personality and Social Psychology, 71, 83-93.

[30] Wright, S. C., Aron, A., McLaughlin-Volpe, T., & Ropp, S. A. (1997). The extended contact effect: Knowledge of cross-group friendships and prejudice. Journal of Personality and Social Psychology, 73, 73-90.

[31] Aron, A., Aron, E., Tudor, M., & Nelson, G. (1991). Close relationships as including other and self. Journal of Personality and Social Psychology, 60, 241-253.

[32] Ferguson, T. J., Rule, G. R., & Carlson, D. (1983). Memory for personally relevant information. Journal of Personality and Social Psychology, 44, 251-261.

[33] Gaertner, L., Sedikides, C., & Graetz, K. (1999). In search of self-definition: Motivational primacy of the individual self, motivational primacy of the collective self, or contextual primacy? Journal of Personality and Social Psychology, 76, 5-18.

[34] Lombardo, M. V., Barnes, J. L., Wheelwright, S. J., & Baron-Cohen, S. (2007). Self-referential cognition and empathy in autism. PLoS ONE, 2(9), e883. doi:10.1371/journal.pone.0000883

[35] Henderson, H. A., Zahka, N. E., Kojkowski, N. M., Inge, A. P., Schwartz, C. B., Hileman, C. M., ... & Mundy, P. C. (2009). Self-referenced memory, social cognition, and symptom presentation in autism. Journal of Child Psychology and Psychiatry, 50(7), 853-861. doi:10.1111/j.1469-7610.2008.02059.x

[36] Strauman, T. J. (1989). Self-discrepancies in clinical depression and social phobia: Cognitive structures that underlie emotional disorders? Journal of Abnormal Psychology, 98(1), 14-22. doi:10.1037/0021-843X.98.1.14

[37] Kovacs, M., & Beck, A. T. (1977). Cognitive-affective processes in depression. In C. E. Izard (Ed.), Emotions and psychopathology (pp. 79-107). New York: Plenum Press.

[38] Mor, N., & Winquist, J. (2002). Self-focused attention and negative affect: A meta-analysis. Psychological Bulletin, 128(4), 638-662. doi:10.1037/0033-2909.128.4.638

[39] Lemogne, C., le Bastard, G., Mayberg, H., Volle, E., Bergouignan, L., Lehericy, S., et al. (2009). In search of the depressive self: Extended medial prefrontal network during self-referential processing in major depression. Social Cognitive and Affective Neuroscience, 4, 305-312.


Chapter 16

Self-referential humor

Self-referential humor or self-reflexive humor is a type of comedic expression[1] that, whether directed toward some other subject or openly directed toward itself, intentionally alludes to the very person who is expressing the humor in a comedic fashion, or to some specific aspect of that same comedic expression. Self-referential humor expressed discreetly and surrealistically is a form of bathos. In general, self-referential humor often uses hypocrisy, oxymoron, or paradox to create a contradictory or otherwise absurd situation that is humorous to the audience.[2]

Self-referential humor is sometimes combined with breaking the fourth wall to make the reference explicitly and directly to the audience, or to make self-reference to an element of the medium that the characters should not be aware of.

Old Comedy of Classical Athens is held to be the earliest form of self-referential comedy in the extant sources. Aristophanes, whose plays form the only remaining fragments of Old Comedy, used fantastical plots, grotesque and inhuman masks, and status reversals of characters to slander prominent politicians and court his audience’s approval.[3]

16.1 Examples

RAS syndrome refers to the redundant use of one or more of the words that make up an acronym or initialism with the abbreviation itself, thus in effect repeating one or more words. However, “RAS” stands for Redundant Acronym Syndrome; therefore, the full phrase yields “Redundant Acronym Syndrome syndrome” and is self-referencing in a comical manner. It also reflects an excessive use of TLAs (Three Letter Acronyms).

Hippopotomonstrosesquipedaliophobia is a fear of long words.

Meta has become colloquially used to refer, particularly in art, to something that is self-referential. Popularised by Douglas Hofstadter, who wrote several books on himself and the subject of self-reference, often over 700 pages, it is the subject of a comical six-word biography created by Randall Munroe, “I'm So Meta, Even This Acronym,” the acronym of which forms “ISMETA”, which would then complete the sentence. Meta-jokes are a popular form of humour.

16.2 See also

• Self-reference

• Self-referential humour

• Indirect self-reference

• In-joke

• Intertextuality

• Meta-humor


16.3 References

[1] “Sentences about Self-Reference and Recurrence”. .vo.lu. Retrieved 2012-08-21.

[2] Self referential humor

[3] Alan Hughes, Performing Greek Comedy (Cambridge, 2012).


Chapter 17

Tupper’s self-referential formula

Tupper’s self-referential formula is a formula defined by Jeff Tupper that, when graphed in two dimensions at a very specific location in the plane, can be “programmed” to visually reproduce the formula itself. It is used in various math and computer science courses as an exercise in graphing formulae. Although it is colloquially known as a “self-referential formula”, this is actually a misnomer,[1] because the image does not encode the constant k, which is external data, and Tupper himself did not describe his formula that way.[2]

The formula was first published in his 2001 SIGGRAPH paper, which discusses methods related to the GrafEq formula-graphing program he developed.

The formula is an inequality defined by:

\[
\frac{1}{2} < \left\lfloor \operatorname{mod}\!\left( \left\lfloor \frac{y}{17} \right\rfloor 2^{-17\lfloor x \rfloor - \operatorname{mod}(\lfloor y \rfloor,\, 17)},\ 2 \right) \right\rfloor ,
\]

where ⌊·⌋ denotes the floor function and mod is the modulo operation.

Let k equal the following 543-digit integer:

960 939 379 918 958 884 971 672 962 127 852 754 715 004 339 660 129 306 651 505 519 271 702 802 395 266 424 689 642 842 174 350 718 121 267 153 782 770 623 355 993 237 280 874 144 307 891 325 963 941 337 723 487 857 735 749 823 926 629 715 517 173 716 995 165 232 890 538 221 612 403 238 855 866 184 013 235 585 136 048 828 693 337 902 491 454 229 288 667 081 096 184 496 091 705 183 454 067 827 731 551 705 405 381 627 380 967 602 565 625 016 981 482 083 418 783 163 849 115 590 225 610 003 652 351 370 343 874 461 848 378 737 238 198 224 849 863 465 033 159 410 054 974 700 593 138 339 226 497 249 461 751 545 728 366 702 369 745 461 014 655 997 933 798 537 483 143 786 841 806 593 422 227 898 388 722 980 000 748 404 719

If one graphs the set of points (x, y) with 0 ≤ x < 106 and k ≤ y < k + 17 that satisfy the inequality given above, the resulting plot reproduces an image of the formula itself (note that the axes in the published plot are reversed; otherwise the picture comes out upside down).
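For concreteness, here is a minimal sketch in Python of how such a plot can be produced; it is our illustration rather than code from Tupper or the article, and the helper name tupper_pixel is ours. It relies on the fact that, for integer x and y, the floor/mod expression above simply reads one bit of ⌊y/17⌋.

# Minimal sketch: print the 106 x 17 bitmap selected by Tupper's inequality
# on the strip k <= y < k + 17, using the 543-digit constant k quoted above.
K = int(
    "960939379918958884971672962127852754715004339660129306651505519271702"
    "802395266424689642842174350718121267153782770623355993237280874144307"
    "891325963941337723487857735749823926629715517173716995165232890538221"
    "612403238855866184013235585136048828693337902491454229288667081096184"
    "496091705183454067827731551705405381627380967602565625016981482083418"
    "783163849115590225610003652351370343874461848378737238198224849863465"
    "033159410054974700593138339226497249461751545728366702369745461014655"
    "997933798537483143786841806593422227898388722980000748404719"
)

def tupper_pixel(x: int, y: int) -> bool:
    # 1/2 < floor(mod(floor(y/17) * 2**(-17*floor(x) - mod(y, 17)), 2))
    # For integers this is bit number 17*x + (y mod 17) of floor(y/17).
    return ((y // 17) >> (17 * x + y % 17)) & 1 == 1

for dy in range(17):              # dy = 0 is the top row (see the bit layout in the next paragraph)
    line = ""
    for x in range(105, -1, -1):  # x = 0 encodes the rightmost column, so it is printed last
        line += "#" if tupper_pixel(x, K + dy) else " "
    print(line)

Swap the loop directions if the output appears mirrored or inverted; as noted above, the published plot reverses the axes.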

The formula itself is a general-purpose method of decoding a bitmap stored in the constant k, so it could actually be used to draw any other image. When applied to the unbounded positive range 0 ≤ y, the formula tiles a vertical swath of the plane with a pattern that contains all possible 17-pixel-tall bitmaps. One horizontal slice of that infinite bitmap depicts the drawing formula itself, but this is not remarkable, since other slices depict all other possible formulae that might fit in a 17-pixel-tall bitmap. Tupper has disseminated, via email, extended versions of his original formula that rule out all but one slice.

The constant k is a simple monochrome bitmap image of the formula treated as a binary number and multiplied by 17. If k is divided by 17, the least significant bit encodes the upper-right corner (k, 0); the 17 least significant bits encode the rightmost column of pixels; the next 17 least significant bits encode the second-rightmost column, and so on, forming the image of the formula.
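Running the formula in the other direction shows why it is a general-purpose decoder: any 17-pixel-tall monochrome bitmap determines its own k. The following sketch is our illustration of the bit layout just described (the names image and encode_k are hypothetical, not from Tupper or the article).

def encode_k(image):
    # image: a list of 17 strings of equal length, top row first,
    # with '#' for a pixel that is on and ' ' for a pixel that is off.
    # Bit 17*x + r of k // 17 is the pixel in column x (counted from the right)
    # and row r (counted from the top), matching the layout described above.
    assert len(image) == 17
    width = len(image[0])
    n = 0
    for x in range(width):        # x = 0 is the rightmost column
        for r in range(17):       # r = 0 is the top row
            if image[r][width - 1 - x] == "#":
                n |= 1 << (17 * x + r)
    return 17 * n                 # k is the bitmap number multiplied by 17

Graphing the inequality for encode_k(image) ≤ y < encode_k(image) + 17 then reproduces image, up to the axis reversal mentioned earlier.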

17.1 See also

• Recursion

• Quine (computing)

• Strange loop

17.2 References

• Tupper, Jeff. “Reliable Two-Dimensional Graphing Methods for Mathematical Formulae with Two Free Variables.” http://www.dgp.toronto.edu/people/mooncake/papers/SIGGRAPH2001_Tupper.pdf

• Weisstein, Eric W. “Tupper’s Self-Referential Formula.” From MathWorld, A Wolfram Web Resource. http://mathworld.wolfram.com/TuppersSelf-ReferentialFormula.html

• Bailey, D. H.; Borwein, J. M.; Calkin, N. J.; Girgensohn, R.; Luke, D. R.; and Moll, V. H. Experimental Mathematics in Action. Natick, MA: A. K. Peters, p. 289, 2006. http://crd-legacy.lbl.gov/~dhbailey/dhbpapers/hpmpd.pdf

• “Self-Answering Problems.” Math. Horizons 13, No. 4, 19, Apr. 2006

• Wagon, S. Problem 14 in http://stanwagon.com/wagon/Misc/bestpuzzles.html

[1] Narayanan, Arvind. “Tupper’s Self-Referential Formula Debunked”. Retrieved 20 February 2015.

[2] “How does Tupper’s self-referential formula work?". Retrieved 20 February 2015.

17.3 External links

• Jeff Tupper official site

• Extensions of Tupper’s original self-referential formula

• TupperPlot, an implementation in JavaScript

• Tupper self referential formula, an implementation in Python

• The Library of Babel function, a detailed explanation of the workings of Tupper’s self-referential formula

• Tupper’s Formula Tools, an implementation in JavaScript


Chapter 18

Universal set

For other uses, see Universal set (disambiguation).

In set theory, a universal set is a set which contains all objects, including itself.[1] In set theory as usually formulated, the conception of a universal set leads to a paradox (Russell’s paradox) and is consequently not allowed. However, some non-standard variants of set theory include a universal set. It is often symbolized by the Greek letter xi.

18.1 Reasons for nonexistence

Zermelo–Fraenkel set theory and related set theories, which are based on the idea of the cumulative hierarchy, do not allow for the existence of a universal set. Its existence would cause paradoxes which would make the theory inconsistent.

18.1.1 Russell’s paradox

Russell’s paradox prevents the existence of a universal set in Zermelo–Fraenkel set theory and other set theories that include Zermelo's axiom of comprehension. This axiom states that, for any formula φ(x) and any set A, there exists another set

{x ∈ A | φ(x)}

that contains exactly those elements x of A that satisfy φ. If a universal set V existed and the axiom of comprehension could be applied to it, then there would also exist another set {x ∈ V | x ∉ x}, the set of all sets that do not contain themselves. However, as Bertrand Russell observed, this set is paradoxical. If it contains itself, then it should not contain itself, and vice versa. For this reason, it cannot exist.
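Stated symbolically, the argument runs as follows (this is just a compact restatement of the paragraph above, writing R for the paradoxical set):

\[
R := \{\, x \in V \mid x \notin x \,\}, \qquad R \in R \iff \bigl( R \in V \wedge R \notin R \bigr) \iff R \notin R ,
\]

where the last step uses the fact that R ∈ V holds automatically because V is universal. Either answer to the question “is R a member of itself?” contradicts the other, so V cannot be a set in a theory that allows comprehension over it.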

18.1.2 Cantor’s theorem

A second difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would automatically be a subset of the set of all sets, provided that both exist. However, this conflicts with Cantor’s theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself.
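The conflict can be made concrete with the standard diagonal argument behind Cantor’s theorem (a routine check, sketched here rather than quoted from the article): for any function f from a set S to its power set,

\[
D := \{\, x \in S \mid x \notin f(x) \,\} \in \mathcal{P}(S), \qquad s \in D \iff s \notin f(s) \ \text{for every } s \in S ,
\]

so D differs from every f(s) and f is never surjective. Together with the injection x ↦ {x}, this gives |P(S)| > |S| for every set S, including any would-be set of all sets.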

18.2 Theories of universality

The difficulties associated with a universal set can be avoided either by using a variant of set theory in which the axiom of comprehension is restricted in some way, or by using a universal object that is not considered to be a set.


18.2.1 Restricted comprehension

There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set V does exist (and V ∈ V is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. A set theory containing a universal set is necessarily a non-well-founded set theory.

The most widely studied set theory with a universal set is Willard Van Orman Quine’s New Foundations. Alonzo Church and Arnold Oberschelp also published work on such set theories. Church speculated that his theory might be extended in a manner consistent with Quine’s,[2] but this is not possible for Oberschelp’s, since in it the singleton function is provably a set,[3] which leads immediately to paradox in New Foundations.[4] The most recent advances in this area have been made by Randall Holmes, who published an online draft version of the book Elementary Set Theory with a Universal Set in 2012.[5]

18.2.2 Universal objects that are not sets

Main article: Universe (mathematics)

The idea of a universal set seems intuitively desirable in Zermelo–Fraenkel set theory, particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier). One way of allowing an object that behaves similarly to a universal set, without creating paradoxes, is to describe V and similar large collections as proper classes rather than as sets. One difference between a universal set and a universal class is that the universal class does not contain itself, because proper classes cannot be elements of other classes. Russell’s paradox does not apply in these theories because the axiom of comprehension operates on sets, not on classes.

The category of sets can also be considered to be a universal object that is, again, not itself a set. It has all sets as elements, and also includes arrows for all functions from one set to another. Again, it does not contain itself, because it is not itself a set.

18.3 Notes

[1] Forster 1995 p. 1.

[2] Church 1974 p. 308. See also Forster 1995 p. 136 or 2001 p. 17.

[3] Oberschelp 1973 p. 40.

[4] Holmes 1998 p. 110.

[5] http://math.boisestate.edu/~holmes/

18.4 References

• Alonzo Church (1974). “Set Theory with a Universal Set,” Proceedings of the Tarski Symposium. Proceedings of Symposia in Pure Mathematics XXV, ed. L. Henkin, American Mathematical Society, pp. 297–308.

• T. E. Forster (1995). Set Theory with a Universal Set: Exploring an Untyped Universe (Oxford Logic Guides 31). Oxford University Press. ISBN 0-19-851477-8.

• T. E. Forster (2001). “Church’s Set Theory with a Universal Set.”

• Bibliography: Set Theory with a Universal Set, originated by T. E. Forster and maintained by Randall Holmes at Boise State University.

• Randall Holmes (1998). Elementary Set Theory with a Universal Set, volume 10 of the Cahiers du Centre de Logique, Academia, Louvain-la-Neuve (Belgium).

• Arnold Oberschelp (1973). “Set Theory over Classes,” Dissertationes Mathematicae 106.

• Willard Van Orman Quine (1937). “New Foundations for Mathematical Logic,” American Mathematical Monthly 44, pp. 70–80.


18.5 External links

• Weisstein, Eric W., “Universal Set”, MathWorld.
