
Signal Detection Theory Basics

by John Michael Williams

[email protected]

2012-08-02

An elementary presentation of signal detection theory and of the modulation transfer function (MTF).

Keywords: signal detection, signal detection theory, psychophysics, likelihood ratio, forced-choice experiment, experimental psychology, receiver operating characteristic, ROC curve, jnd, sensitivity, criterion, isosensitivity curve, Poisson, absolute threshold, normality assumption, modulation transfer function, MTF.

Copyright (c) 2012 by John Michael Williams. All rights reserved.


Signal Detection Terminology [1]

In classical psychophysics, a threshold is determined by finding the lowest stimulus strength at which the subject's (S's) response changes as a function of the stimulus applied. In signal-related work, the stimulus, of course, is a signal. The change in response to a signal at threshold is referred to as a just-noticeable-difference (jnd).

For a sufficiently weak stimulus, the response by S will be the same as that for no stimulus at all; for a sufficiently strong stimulus, S ideally always will emit at least a jnd response. In any case, a response to noise (irrelevant sensory input) always is considered to be possible; noise always provides a background against which the true stimulus, a signal, may be present. Thus, presentation of the stimulus is treated as equivalent to presentation of a signal-plus-noise. When no signal is being presented, all stimulation may be assumed to be due to noise alone.

In signal detection theory, the classical approach above is elaborated by considering the jnd to represent an intervening variable called sensitivity. The problem to which signal detection theory is addressed, then, is that of separating the effects of sensitivity from those of a different intervening variable, called criterion. Intuitively, criterion may be identified with S's tendency (or likelihood) to emit a jnd. S is viewed as an organism capable of inhibiting a jnd independently of signal strength; if the tendency to inhibit is high, criterion is said to be high.

Signal detection theory, as of the present time, most typically refers to two-alternative, forced-choice (2AFC) experimental tasks. This means that one of only two possible stimulus conditions is present on any given task, and that the experimental subject, S, is counted as either responding (jnd) or not responding on each task. Each new task is called a "trial". In such an experiment, a jnd will not always be emitted in the presence of a stimulus which is near S's sensory threshold; this is of crucial interest to the experimenter. There are four possible results on any given 2AFC trial (a brief tallying sketch follows the list):

1. If the response is emitted on a trial in which a stimulus (signal) in fact was present, the jnd may be referred to as a "hit", or "correct detection".

2. If a response is emitted with no signal present, the jnd is referred to as a "false alarm" or "false positive".

3. If no response is emitted but the signal is presented, the trial is referred to as a "miss".

4. If no response is emitted and no signal is presented, the trial is scored as a "correct rejection".
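As a concrete illustration (not part of the classical treatment; the trial records below are hypothetical), a brief Python sketch tallies these four outcomes over a block of trials and computes the hit and false-alarm rates used throughout the rest of this paper:

```python
# Hypothetical trial records: (signal_present, response_emitted) pairs.
trials = [(True, True), (True, False), (False, True), (False, False),
          (True, True), (False, False), (True, True), (False, True)]

hits = sum(1 for s, r in trials if s and r)                         # "hit" / correct detection
misses = sum(1 for s, r in trials if s and not r)                   # "miss"
false_alarms = sum(1 for s, r in trials if not s and r)             # "false alarm"
correct_rejections = sum(1 for s, r in trials if not s and not r)   # "correct rejection"

hit_rate = hits / (hits + misses)                              # estimate of P(detection)
fa_rate = false_alarms / (false_alarms + correct_rejections)  # estimate of P(false alarm)
print(hit_rate, fa_rate)
```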

____________________
[1] This paper is based in part on work by the author in 1979 at a seminar presented at Southern Illinois University, Carbondale, under direction of Professor Alfred Lit. The references may require library access. However, the subject matter is current, because the science, excepting minor terminology and style differences, has remained unchanged over the past thirty or more years.


Criterion is considered to be a variable which is subject to experimental manipulation; it especially is considered to be a function of the instructions given to the experimental subject, and of the immediate past-history of any conditioning (training) given previous to, or during, the experiment (see Galanter, 1962, pp. 94 - 114). By contrast, sensitivity usually is not considered to vary (except parametrically) and may be treated as an inherent, relatively stable characteristic of the sensory system(s) and central nervous system of the organism under study. An underlying simile here is that the organism resembles a radio receiver and, like such a receiver, may be viewed as having a Receiver Operating Characteristic (ROC) curve.

Figure 1: Organization of a typical ROC curve. Pd = probability of detection; PFA = probability of false alarm.

An ROC is a curve typically plotted against a 45° line drawn ascending from left to right inside a square region from (0,0) to (1,1). An example is sketched in Figure 1. The ROC measures any reception by bulging upward and to the left; the larger this bulge, the more salient the received signal. Such a curve depends in part upon a parameter conventionally represented as d', which gives the constant underlying sensitivity of the organism regardless either of criterion or of the particular stimulus being presented on an experimental trial.

Relevant references to the standard works on signal detection theory may be found in Luce (1963).


Likelihoods and Criteria

Let s represent the signal and n the noise. Following Luce (1963), then, stimuli may be considered to have certain effects x, x = (x1, x2, . . . , xk) being a vector in R^k, such that subjective probabilities p_subj[anything/(s + n)] and p_subj[anything/n] are estimated by S and control the response on each trial in the given experimental situation. When event x occurs, S is inferred to have based a decision upon the likelihood ratio

$$L(x) = \frac{P_{subj}[x/(s+n)]}{P_{subj}[x/n]}\,. \qquad (1)$$

If the individual components of x are considered separately, of course, p_subj[x/(s+n)] may be defined as the joint probability p_subj[x1/(s+n)] ⋅ p_subj[x2/(s+n)] ⋅ . . . ⋅ p_subj[xk/(s+n)], which may be written more compactly as

$$p_{subj}[x/(s+n)] = \prod_{i=1}^{k} p_{subj}[x_i/(s+n)]\,. \qquad (2)$$

In this analysis, if L(x) in (1) above is large, S is assumed to have decided that (s+n) has been presented and acts accordingly. If L(x) is small, S is assumed to have decided that n has occurred. The decision is assumed to have been based on a constant-criterion value of L(x), which, if exceeded, will result in a jnd.
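A minimal Python sketch of this decision rule (the component probabilities and the criterion value are hypothetical; the likelihood ratio follows equations (1) and (2)):

```python
import math

def likelihood_ratio(p_x_given_sn, p_x_given_n):
    """Equations (1) and (2): ratio of the joint subjective probabilities of the
    effect components x_i under signal-plus-noise versus noise alone."""
    return math.prod(p_x_given_sn) / math.prod(p_x_given_n)

# Hypothetical component probabilities for one trial's effect vector x.
p_sn = [0.30, 0.25, 0.40]     # p_subj[x_i / (s + n)]
p_n = [0.10, 0.20, 0.30]      # p_subj[x_i / n]

criterion = 2.0               # hypothetical constant-criterion value of L(x)
L = likelihood_ratio(p_sn, p_n)
print(L, "jnd (yes)" if L > criterion else "no response")
```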

As mentioned in Clark (1978), a criterion may be treated (a) as a fixed value of log[L(x)], as above, or (b) as a sort of "intensity" C(x) of sensory experience, in subjective or objective standard-deviation units. An objective test of the difference between Clark's two alternatives is not simple: It is possible to generate an ROC curve by varying the experimental conditions in such a way that S changes criterion under known conditions. Because the ROC curve is monotonic, L(x) defined as a ratio of probability densities (ordinates) is identical to C(x) defined, for fixed d', as a point on an abscissa.

The technicalities here have been somewhat simplified; the reader should seek further information and quantitative details in Luce (1963), and in Galanter (1962, esp. p. 102).

The ROC and likelihood. Once the ROC curve has been determined as above for a given experiment, likelihood ratios for any given criterion, hit rate, or false-alarm rate may be calculated. These likelihood ratios are just slopes of the ROC curve (see Clark, 1978).

The ROC curve also may be estimated by computing likelihood ratios at various criterion values. Such ratios give the slope of the ROC curve at the coordinates of the hits or false alarms determined by each distinct criterion value chosen. Line segments drawn to represent the likelihood ratios, if numerous enough, will bound from above a specific area approximately equal to that bounded by the ROC curve being estimated. Furthermore, given various assumptions as to randomness and the distributions of response errors, this same area may be used, in addition, to derive a value for d'.


Normality assumptions. In a 2AFC design, the signal detection theory depends upon the assumption that both the signal + noise and the noise alone are producing sensory effects random in magnitude but different in probability distribution. Most often, s + n and n, in their effects, are assumed to be normally distributed and of equal variance. These assumptions make d', the sensitivity measure of the theory, equal to the value of Student's t statistic, should a significance test be performed between the s + n and n mean responses.
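Under that equal-variance normal assumption, d' and a criterion measure are commonly estimated from the hit and false-alarm rates by inverse-normal (z) transformation. The following sketch assumes SciPy is available and uses hypothetical rates:

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian model: d' = z(hit rate) - z(false-alarm rate);
    c = -(z(hit rate) + z(false-alarm rate)) / 2 is one common criterion measure."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -(z_hit + z_fa) / 2.0

# Hypothetical rates: roughly d' = 2 with an unbiased criterion.
print(dprime_and_criterion(0.84, 0.16))
```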

It should be mentioned that the ideas of "signal" and "noise" derive from communications circumstances in which a genuine signal (or code) is sent from one human being to another; under such circumstances, the transmitted and received forms of a message, as reported or acted upon, can be compared directly for accuracy. By contrast, in empirical research such as Clark's 1978 study of pain, the stimulus applied may be known, but the response can be measured only as finally emitted. Under these conditions, whether one part of S's body may be considered as "signalling" some other part in any meaningful way becomes something of a question. This question usually, however, may be treated as one of pragmatic importance, only: If the approach works, then it is tenable -- even if it implies underlying factors not amenable to further study.

Multiple Alternatives

Signal detection theory is not limited to 2AFC designs. For example, the likelihood of any particular category in an n-alternative FC (nAFC) design may be computed under a variety of criterion conditions, thus allowing an ROC curve to be plotted separately for each allowed Yes category; the remaining stimulus alternatives in each case may be lumped together as "noise".

Furthermore, there is no good reason why any two Yesses might not be considered a single Yes in the context of the combined signal-plus-noise of the two, the remaining alternatives again being lumped as "noise". However, because a likelihood ratio by definition is a ratio of two likelihoods, the application of signal detection theory must involve, at some point, a transformation of all likelihoods to a well-ordered set of ordinal values, so that a unique likelihood ratio might be made to correspond to each criterion value (see Luce, 1963, pp. 111 - 113).


Poisson Example

We now proceed to some detailed calculations, a worked example of a problem in signal detection theory, based on a 2AFC problem result given in summary form by Van Trees (1968, pp. 29 - 30, 41 - 44). An ROC curve relevant to the solution may be found in Van Trees (Figure 2.11); here, we merely work some detailed calculations necessary to quantify the solution.

First, we state a few general equations common to all related solutions:

The general Poisson relationship in this context, for Pr representing probability and E representing expectation, may be written as

$$\Pr(X = n) = \frac{\mu_i^{\,n}}{n!}\, e^{-\mu_i}, \qquad n = 0, 1, \ldots;\ i = 0, 1, \qquad (3)$$

with E(n) = μi by definition.

For Poisson random variables of identical n, we then may set up expressions for two hypotheses, H0 being the null hypothesis, by writing

$$H_1:\ \Pr(X = n) = \frac{\mu_1^{\,n}}{n!}\, e^{-\mu_1}, \qquad n = 0, 1, 2, \ldots;\ \text{and} \qquad (4)$$

$$H_0:\ \Pr(X = n) = \frac{\mu_0^{\,n}}{n!}\, e^{-\mu_0}, \qquad n = 0, 1, 2, \ldots. \qquad (5)$$

For these hypotheses, a likelihood ratio test may be based upon some decision number η by writing

$$H_1:\ L(n) = \left(\frac{\mu_1}{\mu_0}\right)^{\!n} \exp(\mu_0 - \mu_1) > \eta\,;$$
$$H_0:\ L(n) = \left(\frac{\mu_1}{\mu_0}\right)^{\!n} \exp(\mu_0 - \mu_1) < \eta\,. \qquad (6)$$

If we allow μ1 > μ0, then an equivalent test with threshold value γ may be based upon

$$H_1:\ \gamma = \frac{\ln\eta + \mu_1 - \mu_0}{\ln\mu_1 - \ln\mu_0} < n\,;$$
$$H_0:\ \gamma = \frac{\ln\eta + \mu_1 - \mu_0}{\ln\mu_1 - \ln\mu_0} > n\,. \qquad (7)$$
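A small Python sketch of this Poisson likelihood-ratio test (the values of μ1, μ0, and the decision number η are hypothetical) computes the equivalent count threshold γ of equation (7) and applies it to an observed count:

```python
import math

def poisson_count_threshold(mu1, mu0, eta):
    """Equation (7): the count threshold equivalent to the likelihood-ratio
    test L(n) > eta of equation (6), assuming mu1 > mu0."""
    return (math.log(eta) + mu1 - mu0) / (math.log(mu1) - math.log(mu0))

mu1, mu0, eta = 4.0, 2.0, 1.0          # hypothetical example values
gamma = poisson_count_threshold(mu1, mu0, eta)

n_observed = 4                          # hypothetical count on one trial
print(gamma, "H1 (signal + noise)" if n_observed > gamma else "H0 (noise alone)")
```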


Using (4) above, we now may write the probability of correct detection P_D as

$$P_D = e^{-\mu_1} \sum_{n=\gamma}^{\infty} \frac{\mu_1^{\,n}}{n!} \;=\; 1 - e^{-\mu_1} \sum_{n=0}^{\gamma-1} \frac{\mu_1^{\,n}}{n!}, \qquad \gamma = 0, 1, 2, \ldots. \qquad (8)$$

Using (5) above, the probability of a false alarm P_F then may be written as

$$P_F = e^{-\mu_0} \sum_{n=\gamma}^{\infty} \frac{\mu_0^{\,n}}{n!} \;=\; 1 - e^{-\mu_0} \sum_{n=0}^{\gamma-1} \frac{\mu_0^{\,n}}{n!}, \qquad \gamma = 0, 1, 2, \ldots. \qquad (9)$$

Example 1. If we assume that μ1 = 4 and μ0 = 2 , here is how we may obtain the resulting ROC curve:

Given these μ_j and substituting values of γ in equations (8) and (9) above, we obtain

  γ     PD      PF
  1    .906    .594
  2    .762    .323
  3    .567    .143
  4    .371    .053
  5    .215    .017
  6    .1107   .0045
  7    .0511   .0011

Table 1: Calculated probabilities of detection (PD) and of false alarm (PF) for the specific μ1 and μ0 assumed for this example.

An ROC for such values is graphed in Van Trees (1968, Figure 2.11); or, an equivalent ROC may be plotted by hand, using the values in Table 1. In this problem, larger values of γ correspond to detection at higher criteria; as Table 1 shows, both the probability of responding "yes" (PD) and the false-alarm probability (PF) fall as γ is raised.
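The Table 1 values can be checked with a few lines of Python (SciPy assumed); equations (8) and (9) are upper Poisson tails, and whether the "yes" region is taken as n ≥ γ or n > γ merely shifts the tabulated rows by one:

```python
from scipy.stats import poisson

def poisson_roc_point(gamma, mu1, mu0):
    """Equations (8) and (9): P_D and P_F as upper Poisson tails beginning at gamma."""
    p_d = 1.0 - poisson.cdf(gamma - 1, mu1)   # P(X >= gamma) under mu1
    p_f = 1.0 - poisson.cdf(gamma - 1, mu0)   # P(X >= gamma) under mu0
    return p_d, p_f

mu1, mu0 = 4.0, 2.0
for gamma in range(1, 9):
    p_d, p_f = poisson_roc_point(gamma, mu1, mu0)
    print(f"gamma={gamma}  P_D={p_d:.4f}  P_F={p_f:.4f}")
```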


Example 2. Determine the likelihood ratio L(n) for the condition μ1 = 4 and μ0 = 2:

Using equation (6) above, we obtain the values shown in Table 2.

This result demonstrates that as the n obtained on a trial increases, so does the likelihood that the sample (= trial) was obtained from the distribution formed from μ1 rather than from μ0.
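A short Python sketch of equation (6) generates the likelihood ratios of Table 2:

```python
import math

def poisson_likelihood_ratio(n, mu1, mu0):
    """Equation (6): L(n) = (mu1/mu0)^n * exp(mu0 - mu1)."""
    return (mu1 / mu0) ** n * math.exp(mu0 - mu1)

mu1, mu0 = 4.0, 2.0
for n in range(1, 11):
    L = poisson_likelihood_ratio(n, mu1, mu0)
    print(f"n={n:2d}  L(n)={L:8.3f}  ln L(n)={math.log(L):7.3f}")
```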

Example 3. Suppose that the data in Table 1 above were obtained in an experiment for which we can let γ= n . Assuming this, estimate the values of μ1 and μ0 which must have been in force to obtain those values.

We proceed in four distinct steps:

Step 1: We begin by using the preceding Table 1 data to estimate L(n) as defined in equation (6) above. Recalling that L(n) is the slope of the ROC curve, we may use the Table 1 points pairwise to calculate finite differences, thus obtaining equations, at all coordinates but one, for ln L(n) in terms of μ1 and μ0:

$$\ln\!\left(\frac{.908 - .762}{.594 - .323}\right) \approx 2\,\ln\!\left(\frac{\mu_1}{\mu_0}\right) + \mu_0 - \mu_1 \qquad (10)$$

$$\ln\!\left(\frac{.762 - .567}{.323 - .143}\right) \approx 3\,\ln\!\left(\frac{\mu_1}{\mu_0}\right) + \mu_0 - \mu_1 \qquad (11)$$

  n     L(n)    ln L(n)
  1     .271    -1.307
  2     .541    -0.614
  3    1.083     0.079
  4    2.165     0.773
  5    4.331     1.466
  6    8.661     2.159
  7   17.323     2.852
  8   34.646     3.545
  9   69.292     4.238
 10  138.583     4.931

Table 2: Likelihood ratio (see text).


$$\ln\!\left(\frac{.567 - .371}{.143 - .053}\right) \approx 4\,\ln\!\left(\frac{\mu_1}{\mu_0}\right) + \mu_0 - \mu_1 \qquad (12)$$

$$\ln\!\left(\frac{.371 - .215}{.053 - .017}\right) \approx 5\,\ln\!\left(\frac{\mu_1}{\mu_0}\right) + \mu_0 - \mu_1 \qquad (13)$$

$$\ln\!\left(\frac{.215 - .1107}{.017 - .0045}\right) \approx 6\,\ln\!\left(\frac{\mu_1}{\mu_0}\right) + \mu_0 - \mu_1 \qquad (14)$$

$$\ln\!\left(\frac{.1107 - .0511}{.0045 - .0011}\right) \approx 7\,\ln\!\left(\frac{\mu_1}{\mu_0}\right) + \mu_0 - \mu_1 \qquad (15)$$

Step 2: Next, we evaluate the left-hand sides and combine the six preceding equations pair-wise, scaling each pair so as to eliminate the terms in ln(μ1/μ0); for example, multiplying (10) by 3/2 and then subtracting (11) leaves (μ0 − μ1)/2 on the right-hand side. This yields

$$-1.0078 = \tfrac{1}{2}(\mu_0 - \mu_1)$$
$$-0.4934 = \tfrac{1}{4}(\mu_0 - \mu_1)$$
$$-0.4346 = \tfrac{1}{6}(\mu_0 - \mu_1)\,. \qquad (16)$$

At this point, solving each line of (16) for (μ0 − μ1) and averaging produces the result

$$\mu_1 \approx \mu_0 + 2.1988\,. \qquad (17)$$

Step 3: We now can substitute result (17) into (10) - (15) above:

$$-0.6185 = 2\,\ln\!\left(\frac{\mu_0 + 2.1988}{\mu_0}\right) - 2.1988$$
$$0.0800 = 3\,\ln\!\left(\frac{\mu_0 + 2.1988}{\mu_0}\right) - 2.1988$$
$$0.7783 = 4\,\ln\!\left(\frac{\mu_0 + 2.1988}{\mu_0}\right) - 2.1988$$
$$1.4663 = 5\,\ln\!\left(\frac{\mu_0 + 2.1988}{\mu_0}\right) - 2.1988$$
$$2.0823 = 6\,\ln\!\left(\frac{\mu_0 + 2.1988}{\mu_0}\right) - 2.1988$$
$$2.8639 = 7\,\ln\!\left(\frac{\mu_0 + 2.1988}{\mu_0}\right) - 2.1988\,. \qquad (18)$$


Step 4: Finally, we now can solve each line of (18) for μ0 and average the results, yielding

$$\mu_0 = 1.9911 \approx 2\,; \quad \text{and, from (17),} \quad \mu_1 = 4.1899 \approx 4\,, \qquad (19)$$

completing the solution.
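The whole of Example 3 also can be carried out numerically. The sketch below assumes NumPy is available and replaces the hand pair-wise elimination with an ordinary least-squares line fit, so its estimates differ slightly from (19):

```python
import numpy as np

# (gamma, P_D, P_F) points from Table 1, using .908 for the first P_D as in (10).
roc = np.array([[1, .908, .594], [2, .762, .323], [3, .567, .143],
                [4, .371, .053], [5, .215, .017], [6, .1107, .0045],
                [7, .0511, .0011]])
gammas, p_d, p_f = roc[:, 0], roc[:, 1], roc[:, 2]

# Finite-difference slopes of the ROC estimate ln L(n) at n = gamma + 1 (cf. (10)-(15)).
ln_L = np.log((p_d[:-1] - p_d[1:]) / (p_f[:-1] - p_f[1:]))
n = gammas[:-1] + 1

# ln L(n) = n*ln(mu1/mu0) + (mu0 - mu1) is a straight line in n; fit it by least squares.
a, b = np.polyfit(n, ln_L, 1)        # a = ln(mu1/mu0), b = mu0 - mu1
mu0 = b / (1.0 - np.exp(a))
mu1 = mu0 * np.exp(a)
print(mu0, mu1)                       # both come out close to 2 and 4
```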


Signal Detection at Absolute Threshold

The value of d' increases rapidly from zero as signal intensity rises above absolute threshold. For all signal intensities much below absolute threshold, d' uniformly will be equal to zero.

Using the 1942 data of Hecht, Shlaer and Pirenne and the 1954 data of Denton and Pirenne, Barlow (1956) makes the point that a single quantum seems to be enough to excite a retinal rod, whereas more than one quantum per signal seems to be required to explain various effects of excitation area and duration of exposure.

The results of Hecht et al. were that five to eight quanta were required as a reliable lower bound on retinal sensitivity adequate for detection at absolute threshold under their very favorable conditions. In terms of signal detection, this lower bound was one determined by retinal sensitivity and by criterion. If one assumes that sensitivity in fact was constant under each of the two cited stimulus conditions, then Barlow's (1977, p. 342) result that the criterion is just γ ≈ 6 quanta per trial follows at once. This means that s + n versus n distributions separated by much less than about 6 quanta/trial would yield few yes (detected) decisions; s + n versus n distributions separated by considerably more than 6 quanta/trial would yield many or all yes's.

As Barlow (1977, pp. 341 ff.) points out, the Hecht, et al approach does not consider the existence or effect of retinal noise. Barlow estimates the threshold-with-noise to be about three times the size of the noiseless threshold. This suggests that the retinal noise intensity is twice the signal intensity when the probability of correct detection is 0.5.

Stated quantitatively, including retinal noise, this means that the average Hecht, Shlaer, and Pirenne criterion of 6 may be written as

$$\sum_{n=\gamma=6}^{\infty} \frac{e^{-a}\, a^{n}}{n!} = 0.5\,; \quad \text{or, equivalently,} \qquad (20)$$

$$\sum_{n=0}^{\gamma-1\,=\,5} \frac{e^{-a}\, a^{n}}{n!} = 0.5\,. \qquad (21)$$

Solving (21), one obtains a ≈ 5.65 for the mean of the putative underlying Poisson distribution of signal + noise events at the retina. On each such trial, then, the average signal intensity will have been 5.65/3 ≈ 1.88 events, whereas the average noise intensity will have been 2(5.65)/3 ≈ 3.77 events.
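A brief Python sketch (SciPy assumed) solves equation (21) numerically for a and splits the result into signal and noise components as in the text:

```python
from scipy.optimize import brentq
from scipy.stats import poisson

# Equation (21): find a such that P(X <= 5) = 0.5 for X ~ Poisson(a).
a = brentq(lambda m: poisson.cdf(5, m) - 0.5, 1.0, 20.0)

signal = a / 3.0         # Barlow: noise is twice the signal at threshold
noise = 2.0 * a / 3.0
print(a, signal, noise)  # close to the text's 5.65, 1.88, and 3.77
```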


The formulas given here, and for the worked example above, now may be used to compute the following table which, with Figures 2 and 3 below, describes the sought retinal signal and noise characteristics:

  n   Pn(n); μ0 = 3.77   Ps+n(n); μ1 = 5.65   L(n) = H1/H0   ln[L(n)]    PFA      PD
  0        .0231               .0035              .1536      -1.8800      -        -
  1        .0869               .0199              .2287      -1.4754    .8900    .9766
  2        .1638               .0561              .3427      -1.0708    .7262    .9205
  3        .2059               .1057              .5136      -0.6663    .5204    .8147
  4        .1940               .1494              .7698      -0.2617    .3263    .6654
  5        .1463               .1688             1.1536       0.1429    .1800    .4966
  6        .0919               .1589             1.7289       0.5475    .0881    .3377
  7        .0495               .1283             2.5911       0.9521    .0386    .2094
  8        .0233               .0906             3.8831       1.3566    .0153    .1188
  9        .0098               .0569             5.8196       1.7612    .0055    .0619
 10        .0037               .0321             8.7216       2.1658    .0018    .0298

Table III: Estimated probabilities of false alarm and of detection for the equation (21) data.

Figure 2: Poisson distributions based on Barlow (1977) for retinal signal detection. Data from Hecht, et al (1942).


Figure 3: ROC for the Poisson distributions plotted in Figure 2 above. d = detection; FA = false alarm.


Photopigments and Thermal Noise

Barlow (1957) suggests that the major retinal noise source is Planckian thermal radiation from the tissues within the eye itself. The Planckian radiant energy (heat) decomposes the photopigments of the eye in a way indistinguishable from the decomposition caused by incident photons in the range of visible light. Using an approach he attributes to Stiles, Barlow (1957) gives the expression

$$I_\lambda = \lambda^{-1}\,\exp\!\left[-\,\frac{hc}{2\lambda k T}\right], \qquad (22)$$

in arbitrary energy units, for the noise intensity which determines the threshold for a photopigment with peak absorption at wavelength λ. In this formula, T represents the temperature of the eye in degrees Kelvin (about 310 K), and k is Boltzmann's constant.

Granting Barlow's assumptions, the Hecht, Shlaer, and Pirenne data mentioned above suggest a retinal noise intensity of about 3.77 quantal-equivalent events per trial at a wavelength of 510 nm, which is not far from the 507 nm wavelength at which the human retinal scotopic rod receptors have their maximum sensitivity. Thus, the noise comes to a total, in equivalent 510 nm radiation, of

$$I_{510} = \frac{hc}{\lambda}\cdot(3.77) = \frac{(6.63\times10^{-34})\cdot(3\times10^{8})\cdot(3.77)}{510\times10^{-9}} = 1.5\times10^{-18}\ \text{joule/trial}\,. \qquad (23)$$

Barlow's (1957) formula

$$\frac{I_1}{I_0} = \frac{\lambda_0}{\lambda_1}\cdot\exp\!\left[\frac{hc\,(1/\lambda_0 - 1/\lambda_1)}{2 k T}\right] \qquad (24)$$

then yields a noise intensity of 48⋅I_510 = about 7×10^-17 joule/trial at the photopic peak of V_λ at 555 nm, this being a wavelength about equal to that of greatest human photopic visual sensitivity. Thus, the optimum absolute threshold for photopic cone receptor vision predicted at 555 nm would be approximately

$$I_{signal} = \frac{I_{noise}}{2} = \frac{7\times10^{-17}}{2} = n\,E = \frac{n\,h\,c}{555\ \text{nm}}\,. \qquad (25)$$

Solving for n, we obtain n = about 100 quanta at 555 nm.
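The arithmetic of equations (23) and (25) is easily checked in Python; the factor of about 48 from equation (24) is taken here as given in the text:

```python
h = 6.63e-34             # Planck's constant, J*s
c = 3.0e8                # speed of light, m/s

# Equation (23): 3.77 quantal-equivalent noise events/trial at 510 nm, in joules.
I_510 = (h * c / 510e-9) * 3.77
print(I_510)             # about 1.5e-18 joule/trial

# Per equation (24) as evaluated in the text, the noise scales by about 48 at 555 nm.
I_noise_555 = 48 * I_510            # about 7e-17 joule/trial

# Equation (25): the signal threshold is half the noise; convert to quanta at 555 nm.
n_quanta = (I_noise_555 / 2) / (h * c / 555e-9)
print(n_quanta)          # about 100 quanta
```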

This approach would put photopic threshold at about 1.4 log10 units above scotopic threshold, in energy units.

Examining our result, data of Hecht and Hsia (1945) republished in Graham (1965, Figure 4.6) suggests that under some conditions, at least, the 507/555 nm photochromatic difference is found to be about 1.55 log10 units, in fairly good agreement with the above computations.

Our above line of reasoning suggests that the central contribution to absolute threshold can be expected to be small, on the order of 0.2 log10 units. However, a negligible central contribution was assumed by Barlow in the first place . . ..


More on the MTF and Noise

As shown by the Francon (1963, ch. 7) derivation for the image of a point source, an object-point on the optic axis (x, y) will yield an image amplitude U at an off-axis location (x', y') given by

$$U(x', y') = \iint_{W} \exp\left[-jK(\alpha' x' + \beta' y')\right]\, d\alpha'\, d\beta'\,. \qquad (26)$$

For these first four paragraphs, we shall use z to represent the distance between the wave surface and the image plane; consistent with this, we shall use the two orthogonal planes (x, y) and (α, β) to represent the image plane and wave surface, respectively. If the amplitude and phase on the wave-surface W differ from unity by a factor g(α', β'), then U(x', y') is given by the fourier transform of g, which makes U and g fourier transform pairs related as follows:

$$U(x', y') = \iint_{W} g(\alpha', \beta')\cdot\exp\left[-jK(\alpha' x' + \beta' y')\right]\, d\alpha'\, d\beta' \qquad (27)$$

$$g(\alpha', \beta') = \iint_{I} U(x', y')\cdot\exp\left[-jK(\alpha' x' + \beta' y')\right]\, dx'\, dy' \qquad (28)$$

Here, I is the amplitude and phase of the corresponding center of the image. In section 7.2 of his 1963 text, Francon, using slightly different (x, y, z) coordinates, shows that these equations may be used to prove, under quite general conditions, that the fourier transform of the image of an incoherent object equals the product of the fourier transform of that object with the fourier transform of the image of an isolated point such as the on-axis point introduced to obtain (26) above. In these representations, the object and image are described in terms of luminous fluxes or intensities, because the vector addition of amplitudes is replaced by a scalar addition in all small local regions of the image and object.

Thus, in effect, an optical system consisting, for simplicity, of a single simple lens, performs two fourier transforms in succession in order to process an image: First, the object amplitude o(x , y) is transformed to g(α' ,β' ) at the lens; then, the latter is transformed to U (x ' , y ' ) in the image plane.

Finally, if we can assume a linear system (as in paraxial optics), the Modulation Transfer Function (MTF) of a system is given by the ratio of the fourier transform of the output amplitude to the fourier transform of the input amplitude:

$$MTF(s, t) = \frac{G_U(s, t)}{F_O(s, t)}\,, \qquad (29)$$

in which F is the transform of the object and G is the transform of the image, as explained just above.

The MTF in general is a complex function of the transform variables (spatial frequencies), its modulus giving the amplitude attenuation and its phase giving the phase shift.
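As a minimal one-dimensional numerical sketch of equation (29) (NumPy assumed; the object profile and the Gaussian blur standing in for the optical system are hypothetical), the transfer function can be estimated as the ratio of discrete fourier transforms, its magnitude giving the attenuation at each spatial frequency the object contains:

```python
import numpy as np

y = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)

# Hypothetical object profile; a Gaussian low-pass in the frequency domain
# stands in for the optical system and produces the "image".
obj = np.cos(y) + (1.0 / 3.0) * np.cos(3.0 * y)
omega = 2.0 * np.pi * np.fft.fftfreq(y.size, d=y[1] - y[0])
img = np.real(np.fft.ifft(np.fft.fft(obj) * np.exp(-0.5 * omega ** 2)))

# Equation (29): transfer function = transform of image / transform of object,
# evaluated only where the object actually has energy.
F, G = np.fft.fft(obj), np.fft.fft(img)
mask = np.abs(F) > 1e-9
print(np.round(np.abs(G[mask]) / np.abs(F[mask]), 3))
```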


The MTF and signal detection. To show how the preceding derivations can be used in an actual application, suppose that we have a received signal r(y') such that

$$r(y') = m(y') + n(y')\,, \qquad (30)$$

in which m is the signal, n is random noise, and y represents spatial location. In such a case, the MTF may be used to calculate the power spectrum of the signal and the noise.

To begin, let us consider only one spatial dimension of the objects and images concerned. The one-dimensional fourier transform pairs corresponding to (27) and (28) above therefore may be written as

$$U(y') = \int_{\{\sigma\}} g(\beta')\cdot\exp\left[-jK\beta' y'\right]\, d\beta'\,, \qquad (31)$$

in which σ is a segment of a great circle on W passing through the center of the lens; and,

$$g(\beta') = \int_{\{I\}} U(y')\cdot\exp\left[-jK\beta' y'\right]\, dy'\,, \qquad (32)$$

in which I is a line passing through the center of the image. These two transforms are relevant in that, in general, a power spectrum may be obtained for problems such as ours by computing the fourier transform of the waveform autocorrelation function.

To show an actual calculation, suppose that we have a system MTF as shown in Figure 4 immediately below; this is not unlike the MTF one would expect for a typical simple lens.

Figure 4: Simulated real part of our system MTF, with MTF(s) = |G(s)| / |F(s)|.

Returning to our problem of (30) above, let us suppose further that the source spatial amplitudes are given by

$$m(y) = \cos(y) + \tfrac{1}{3}\cos(3y)\,, \ \text{and} \qquad (33)$$

$$n(y) = 1/2\,, \ \text{for all } y. \qquad (34)$$


The total signal power then would be

$$\int [m(y)]^2\, dy = \int \cos^2(y)\, dy + \tfrac{1}{9}\int \cos^2(3y)\, dy\,; \quad \text{that is, a power of } 1 \text{ at } s = 1 \text{ and } 1/9 \text{ at } s = 3. \qquad (35)$$

The received power would be

$$\int_{s} \left[\, MTF(s)\cdot\sqrt{\text{power}(s)}\,\right]^{2} ds\,; \qquad (36)$$

or, using Figure 4 above, we have, for m, about

$$(.8)^2\cdot(1) + (.5)^2\cdot(1/9) = .67\,; \qquad (37)$$

and, for n, about

$$\left[\,(.1)^2\cdot(90) + (.5)^2\cdot(10)\,\right]\cdot(1/4) = .85\,. \qquad (38)$$

MTF selective filtering. If we assume that the receiver input is designed to pass spatial frequencies s = 1 and s = 3 and to exclude all others, we obtain an MTF more or less as shown in Figure 5:

Figure 5: Sketch of MTF for a selectively-filtered receiver which accepts only spatial frequencies of s = 1 and s = 3.

Given this MTF, we very simply can continue the calculation to estimate the noise power; we arrive at a result of

$$(.8)^2\cdot(1/2) + (.5)^2\cdot(1/2) = \text{about } .45\,, \qquad (39)$$

and the received signal power again will be about .67, as in (37) above.


Van Trees (1968, sect. 4.2) provides a statistical test for examples such as ours and yields

$$\hat{d}' = \left(2\cdot\text{signal power}/\text{noise power}\right)^{1/2} \qquad (40)$$

$$\hat{d}' = \left(2\cdot(.67/.45)\right)^{1/2} = \text{about } 1.73\,, \qquad (41)$$

from which an ROC may be plotted to a fairly good approximation.

The unfiltered case in (37) and (38) above yields a d̂' of only about 1.26.
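A short Python sketch ties these last steps together, reproducing (37) through (41) with the MTF values read from Figures 4 and 5:

```python
import math

def received_power(mtf_by_freq, power_by_freq):
    """Equation (36), discretized: sum over s of [MTF(s) * sqrt(power(s))]^2."""
    return sum((mtf_by_freq[s] ** 2) * power_by_freq[s] for s in power_by_freq)

signal_power = {1: 1.0, 3: 1.0 / 9.0}       # from equation (35)

# Unfiltered case, MTF values read from Figure 4; the flat noise n(y) = 1/2
# contributes the bracketed spectrum of equation (38) times 1/4.
sig = received_power({1: 0.8, 3: 0.5}, signal_power)              # about .67
noise_unfiltered = ((0.1 ** 2) * 90 + (0.5 ** 2) * 10) * 0.25     # about .85

# Selectively filtered case (Figure 5): only s = 1 and s = 3 are passed.
noise_filtered = (0.8 ** 2) * 0.5 + (0.5 ** 2) * 0.5              # about .45

print(math.sqrt(2.0 * sig / noise_filtered))     # d' of about 1.73, as in (41)
print(math.sqrt(2.0 * sig / noise_unfiltered))   # d' of about 1.26, unfiltered
```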

This concludes our presentation on these topics.


References

Barlow, H. B. Retinal noise and absolute threshold. Journal of the Optical Society of America, 1956, 46, 634 - 639.

Barlow, H. B. Retinal and Central Factors in Human Vision Limited by Noise. In H. B. Barlow and P. Fatt (Eds.): Vertebrate Photoreception. New York: Academic Press, 1977.

Clark, W. C. Signal detection theory and pain. In F. W. L. Kerr and K. L. Casey (Eds.), Pain (= NRPB, 1978, vol. 16(1), 14 - 27).

Denton, E. J. and Pirenne, M. H. The absolute sensitivity and functional stability of the human eye. Journal of Physiology, 1954, 123, 417 - 442.

Francon, M. Modern Applications of Physical Optics. New York: John Wiley and Sons, 1963.

Galanter, E. Contemporary psychophysics. In T. M. Newcomb (Fwd.), New Directions in Psychology I. New York: Holt, Rinehart and Winston, 1962, pp. 94 - 114.

Graham, C. Some Fundamental Data. Ch. 4 of Vision and Visual Perception, C. H. Graham (Ed.). New York: John Wiley and Sons, 1965. The referenced Figure 4.6 of this chapter was reproduced from Hecht, S. and Hsia, Y., Dark adaptation following light adaptation to red and white lights: Journal of the Optical Society of America, 1945, 35, 261 - 267.

Hecht, S., Shlaer, S. and Pirenne, M. H. Energy, quanta and vision. Journal of General Physiology, 1942, 25, 819 - 840.

Luce, R. D. Detection and Recognition. Chapter 3, pp. 103 - 189, in R. D. Luce, R. R. Bush and E. Galanter, Handbook of Mathematical Psychology, Vol. 1. New York: John Wiley and Sons, 1963.

Van Trees, H. L. Detection, Estimation and Modulation Theory. New York: John Wiley and Sons, 1968.