
University of Tennessee, Knoxville
Trace: Tennessee Research and Creative Exchange

Doctoral Dissertations Graduate School

12-2009

An Automated Diagnostic Tool for Predicting Anatomical Response to Radiation Therapy
Carley Elizabeth Harris
University of Tennessee - Knoxville

This Dissertation is brought to you for free and open access by the Graduate School at Trace: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of Trace: Tennessee Research and Creative Exchange. For more information, please contact [email protected].

Recommended Citation
Harris, Carley Elizabeth, "An Automated Diagnostic Tool for Predicting Anatomical Response to Radiation Therapy." PhD diss., University of Tennessee, 2009.
https://trace.tennessee.edu/utk_graddiss/606

To the Graduate Council:

I am submitting herewith a dissertation written by Carley Elizabeth Harris entitled "An Automated Diagnostic Tool for Predicting Anatomical Response to Radiation Therapy." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Nuclear Engineering.

J. Wesley Hines, Major Professor

We have read this dissertation and recommend its acceptance:

Lawrence Townsend, Chris Pionke, Chester Ramsey, Laurence F. Miller

Accepted for the Council: Carolyn R. Hodges

Vice Provost and Dean of the Graduate School

(Original signatures are on file with official student records.)

AN AUTOMATED DIAGNOSTIC TOOL FOR PREDICTING

ANATOMICAL RESPONSE TO RADIATION THERAPY

A Dissertation

Presented for the

Doctor of Philosophy

Degree

The University of Tennessee, Knoxville

Carley Elizabeth Harris

December 2009


ACKNOWLEDGEMENTS

I would like to acknowledge and thank those who helped me during my years of study in

Nuclear Engineering. I would first like to thank Dr. J. Wesley Hines for serving as my major

professor. His knowledge as a professor and guidance as a mentor have been greatly valued and I

am honored that I had the opportunity to finish my graduate study under his supervision. I also

would like to thank the other members of my committee, Dr. Chester Ramsey, Dr. Lawrence

Townsend, Dr. Christopher Pionke, and Dr. Laurence F. Miller, who have all provided me with

both personal and academic guidance through my years at the University of Tennessee,

Knoxville. They all are excellent professors and I am appreciative of their willingness to serve on

my committee.

I would also like to give a special acknowledgment to Dr. Alexander Usynin, who created

the tool that was used to collect all of my data. Also, I would like to thank him for all of the time

that we spent discussing my project and for sharing his useful knowledge about prediction

modeling.

I want to thank all of the professors and staff at the University of Tennessee who

continued to give me support through my years as a student. A special thank you goes to my

family in Estabrook, where Janet Coward, Dr. Roger Parsons, and Dr. Christopher Pionke greatly

contributed to both my personal and academic growth and allowed me the opportunity to teach

freshman engineering students for four years.


I would like to thank Dr. H. Lee Dodds for helping me find my way into the Nuclear

Engineering Department and for all of the encouraging words and guidance that I received from

him during my graduate years.

Finally, I want to thank all of my family and friends for their continued

encouragement and support.


ABSTRACT

A "Clinical Decision Support System" (CDSS) is a concept which has advanced rapidly

in health care over the last few decades, and it is defined as "an interactive computer program

that is designed to assist physicians and other health professionals with decision-making tasks."

Radiation oncologists, however, are required to make decisions that do not involve making a disease diagnosis. Therefore, this work focuses on developing a modified CDSS which can be used to aid oncology staff in identifying cancer patients who will require adaptive radiation therapy (ART).

An image-guided radiation therapy (IGRT) tool was developed that consists of both

diagnostic and prognostic processes. Patients who will require ART are those whose cross-

sectional neck measurements change by more than half of a centimeter over the entire course of

treatment. First, the tool allows one to “diagnose” or identify which patients would benefit from

adaptive therapy and then make a “prognosis,” or identify when ART is required.

Thirty head and neck (H&N) patients were used in this study, and 15 required ART.

Each diagnosis was made by predicting whether the 0.5 cm threshold would be crossed for each of the four cross-sectional measurements, and each time prediction was made by determining when the threshold would be crossed. The diagnosis results show that just over half (61 of 120) of the measurements predicted that patients would need ART given only the first 15 observations, and a further 28 of 120 predicted the need for ART within 20 observations. Therefore, 74% of patients' measurements correctly diagnosed that ART would be required given just the first 20 observations. The prediction results indicate that an average of 11 observations is needed to make adequate time predictions with a


reliability of at least 0.5. However, more accurate time predictions with higher reliability values

(0.6 and 0.7) could be made given an average of 16 and 18 observations, respectively. These

predictions, while requiring more observations, provided additional lead time in knowing when

ART is required.


TABLE OF CONTENTS

1 INTRODUCTION
   1.1 BACKGROUND
   1.2 ORIGINAL CONTRIBUTIONS
   1.3 DOCUMENT ORGANIZATION
2 LITERATURE SURVEY
   2.1 INTRODUCTION
   2.2 ADVANCES IN RADIATION THERAPY
      2.2.1 Portal Imaging
      2.2.2 Image-Guided Radiation Therapy
   2.3 ADAPTIVE RADIATION THERAPY
      2.3.1 Identifying Uncertainty and Error
      2.3.2 Anatomical Changes
   2.4 IGRT FOR ART IMPLEMENTATION
   2.5 CLOSING REMARKS
3 METHODOLOGY
   3.1 DESCRIPTION OF THE DATA
      3.1.1 Data Collection
      3.1.2 Reduction of Dimensionality
   3.2 DESCRIPTION OF THE DIAGNOSTIC PROCESS
      3.2.1 Tools for Diagnosis
      3.2.2 Measure of Reliability
   3.3 DESCRIPTION OF THE PROGNOSTIC PROCESS
      3.3.1 Prediction Model Selection
      3.3.2 Reliability Metric
4 RESULTS
   4.1 DIAGNOSIS
      4.1.1 Data Set Selection
      4.1.2 Diagnosing ART Candidates
   4.2 PREDICTION
      4.2.1 Kernel Classification
      4.2.2 Extrapolation Model
5 CONCLUSIONS
   5.1 RECOMMENDATIONS FOR FUTURE WORK
REFERENCES
APPENDICES
VITA


LIST OF FIGURES

Figure 1 Representation of the need for adaptive radiation therapy (H&N Patient 14)
Figure 2 Patient who required adaptive therapy [Hansen 2006]
Figure 3 Target volume definitions
Figure 4 Rapid anatomy changes for one H&N patient (a) before and (b) three weeks into radiation therapy [Barker et al. 2004]
Figure 5 Cone-beam CT image showing anatomical response to radiation therapy [Dawson & Sharpe 2006]
Figure 6 Patient 7 undergoing anatomical changes
Figure 7 Cross-sectional measurement definitions
Figure 8 Data collection tool
Figure 9 Numerical example for calculating a trimmed mean
Figure 10 Trimmed Mean Performed
Figure 11 Illustration of removing outlying data
Figure 12 Depiction of measurement "one"
Figure 13 All measurements changed by 0.5 cm
Figure 14 No measurements cross the 0.5 cm threshold
Figure 15 Illustration of calculating the Closeness to Crossover value
Figure 16 Correlation vs. Closeness to Crossover
Figure 17 Significant decreasing trend for Patient 14
Figure 18 Slightly decreasing trend for Patient 24
Figure 19 Correlation vs. Percent Error for Case #1
Figure 20 Correlation vs. Percent Error for Case #2
Figure 21 Measurement 1 prediction after 10 observations
Figure 22 Measurement 1 prediction after 15 observations
Figure 23 Measurement 2 prediction after 10 observations
Figure 24 Measurement 2 prediction after 15 observations
Figure 25 Measurement 3 prediction after 10 observations
Figure 26 Measurement 3 prediction after 15 observations
Figure 27 Measurement 4 prediction after 10 observations
Figure 28 Measurement 4 prediction after 15 observations
Figure 29 Illustration of anatomical changes for Patient 17 after 25 treatments


LIST OF TABLES

Table 1 Patient demographic information
Table 2 Crossover data for Patients 1-10
Table 3 Crossover data for Patients 11-20
Table 4 Crossover data for Patients 21-30
Table 5 Diagnosis of ART candidates
Table 6 Kernel Classification prediction results
Table 7 Reliability for patients requiring ART
Table 8 Prediction results for a reliability > 0.5
Table 9 Prediction results for a reliability > 0.6
Table 10 Prediction results for a reliability > 0.7


1 INTRODUCTION

The primary goal of radiotherapy is to destroy tumor cells contained in a volume. This volume is physician-defined and is oftentimes located near or within normal, healthy tissue. A compromise, or conflicting goal, arises when treating with radiotherapy, because enough radiation must be used to treat the pre-defined target volume while still sparing the function of surrounding normal tissues and structures. To find the best compromise and achieve the primary goal, both accurate and reproducible radiation treatments must be delivered over the entire course of therapy.

External-beam radiation therapy (EBRT) involves treating cancer patients with radiation that leaves the source (a linear accelerator) and enters the body through the skin. It has been in use for roughly 100 years, and over the last eight decades it has proven to be an effective therapeutic means for controlling and even curing cancer. In the last fifty years, rapid technological advances have allowed exceptional radiation therapy treatment to be delivered to all types of cancer patients.

1.1 BACKGROUND AND RESEARCH NEED

Head and neck (H&N) cancer is one type of cancer that can be a challenge to treat. There

are a few options for treatment, including radiation therapy, chemotherapy, surgery, or a

combination of these. When using radiation therapy, H&N cancer can be difficult to treat in

terms of covering the tumor adequately, avoiding normal tissues, and delivering the radiation to a

precise location in the head and neck region. As mentioned before, the goal of radiation therapy


is to deliver an accurate and reproducible treatment. To be accurate, normal structures, such as

saliva-producing parotid glands, spinal cord, and optic nerves, must be avoided, and

reproducibility is achieved with the use of immobilization devices.

H&N patients are of particular interest in radiation therapy because they are known to

incur large anatomical changes (weight loss, tumor shrinkage) over the course of treatment. If

large enough changes occur, the immobilization device, or facemask, can become ineffective in

holding a patient stationary and treatment plans can become invalid due to the delivery of

inaccurate doses. Once the accuracy and reproducibility of a plan are compromised, an adjustment is needed before the completion of a patient's treatment. Therefore, the two main concerns for head and neck patients are that the facemask correctly immobilizes the patient's head during treatment and that the correct radiation dose is delivered. If either one of these is compromised, then there is a need for adaptive therapy. One might ask, "How do we

know which patients are incurring large anatomical changes?" or "At what point during the

course of treatment should we be concerned about delivering an incorrect radiation dose due to

an ineffective facemask?" Answering these questions ultimately identifies those head and neck

patients who would "benefit from adaptive radiation therapy," and finding the answers is the

main focus of this dissertation.

The main problem is quantifying “significant anatomical change” and determining which

patients have undergone enough change to require a new facemask and/or a new treatment plan.

While the main focus of this dissertation is to identify those patients who will need a new

facemask, it is still important to note the consequences of not having a new mask, meaning the

incorrect delivery of radiation from an invalid treatment plan. Without adapting a treatment plan,


it is possible for a patient to experience increased doses to normal tissue while under-dosing the

tumor volume. This is extremely undesirable due to a chance of tumor recurrence or toxicity to

the patient.

Figures 1 and 2 illustrate exactly why there is a need for adaptive radiation therapy. The

first patient experienced such significant tumor response to treatment that the mask no longer

touched the sides of the face, which should have required a new mask to be made. However, no

new mask was made and the reason for this is unknown. The second patient experienced changes

that caused the doses to shift, risking over-irradiating the spinal cord, and the middle image

shows what would have been delivered if no adaptive therapy had taken place. However, a new

treatment plan was developed and the last image shows the corrected plan which is no longer

directly delivering dose to the patient's spinal cord.

Many times in a real clinic setting, the oncologist is unaware of significant changes such

as those shown for these patients. Therefore, a method should be available for deciding when to implement adaptive therapy, so that patients requiring additional attention can be identified and the physician can proceed with the appropriate course of action. Methods are currently available for

making decisions regarding a diagnosis, but cancer patients have already been diagnosed and

often require a different type of decision-making during their course of treatment.


Figure 1: Representation of the need for adaptive radiation therapy (H&N Patient 14)

Figure 2: Patient who required adaptive therapy [Hansen 2006]


A "Clinical Decision Support System" (CDSS) is a concept which has advanced rapidly

in health care over the last few decades. The idea of having computer-aided diagnostic tools

began back in the 1970s [de Dombal et al. 1972], and a CDSS is defined as "an interactive

computer program that is designed to assist physicians and other health professionals with

decision-making tasks." More recently, a CDSS may be either knowledge-based or non-knowledge-based, and the methodologies used by these systems are well known for their problem-solving abilities. Examples of these methods include Bayesian Networks, Neural Networks,

Genetic Algorithms, Rule-Based Systems, Logical Condition, and Causal Probabilistic

Networks. All methods are useful in the development of a CDSS, and much research has been

done on the feasibility and effectiveness of using the system to make a decision or diagnosis

[Johnston et al. 1994]. While these systems are growing in popularity among general practice physicians, radiation oncologists are required to make slightly different types of

decisions which do not involve making a disease diagnosis. Therefore, a modified CDSS, such as

the IGRT tool presented in this work, can be used to aid oncology staff in decision-making

processes.

The idea of compensating for changes and customizing treatment plans to individual

cancer patients was first introduced by members of the Department of Radiation Oncology at the

William Beaumont Hospital [Yan et al. 1997a]. Termed adaptive radiation therapy, the idea was

proposed as a closed-loop radiation treatment process where treatment plans could be modified

by using a systematic feedback of measurements. The first use of this process was measuring and

correcting for treatment position variation, with mention of eventually extending it to the

measurement of target geometric variation, or tumor changes.


Many aspects of radiation therapy have to be monitored and checked during the course of

treatment, including patient positioning, doses delivered to the tumor and surrounding tissue, and

normal/tumor tissue response to radiation. After the concept of ART was introduced, many

studies involving imaging and adaptive therapy followed, attempting to address all of these

aspects. For example, Yan et al. [1998] conducted an ART feasibility study using portal imaging

to evaluate patients for setup error due to incorrect patient positioning, finding that ART was

extremely valuable in routine clinical practice for reducing systematic errors. Hansen et al.

[2006] evaluated the dosimetric effects on target volumes and normal tissues by repeating and

evaluating daily CT scans. Work done by Barker et al. [2004] involved quantifying volumetric

and geometric tumor changes that occurred for H&N patients during courses of radiotherapy.

These three studies have been extensively referenced in other publications whenever adaptive

therapy is mentioned. They all conclude that using available imaging technology along with

ART is extremely valuable and that systematic measurements and data can be useful in the

development of an ART scheme.

True implementation of an ART process will require a new infrastructure where the

clinical procedures such as treatment planning, treatment verification, treatment evaluation, and

treatment adjustment are not performed as independent tasks. This dissertation will concentrate

on the treatment evaluation and adjustment parts of an ART process by developing an off-line

image-guided radiation therapy tool. This tool will allow for H&N patient measurement data to

be evaluated and provide feedback to the radiation oncologists so that any necessary adjustments

can be made. In addition to describing the model used to build the tool, this dissertation will

explain how the tool can be integrated into currently-established IGRT systems so that


measurements can be automatically made and analyzed while the patient is undergoing

treatment.

Table 1 shows a description of the 30 patients used to develop the IGRT tool. The

demographic information is shown to point out that several types of diseases and staging were

used for the study, and that an effective ART method should accommodate any type of H&N

patient. Other demographic information, such as whether or not a patient was receiving chemotherapy during radiation, surgery status, and total prescribed dose, was also considered important to report.


Table 1: Patient demographic information

Patient  Primary Site         Stage     Pathology       Chemotherapy  Surgery  Dose (Gy)
-------  -------------------  --------  --------------  ------------  -------  ---------
1        Unknown              TXN2M0    Squamous        Concurrent    Post-Op  66.0
2        Unknown              TXN1M0    Squamous        Concurrent    None     70.2
3        Tonsil               T2N1M0    Squamous        Concurrent    None     61.2
4        BOT*                 rTXNXM0   Squamous        Concurrent    None     68.0
5        Larynx               T3N0M0    Squamous        Concurrent    None     66.4
6        Epiglottis           T2N0M0    Squamous        Concurrent    None     66.0
7        BOT                  T1N1M0    Squamous        Concurrent    Post-Op  66.0
8        BOT                  T1N1M0    Squamous        None          Post-Op  60.0
9        Tongue               T2N1M1    Squamous        Concurrent    None     68.4
10       Pharynx              T2N1M0    Squamous        Concurrent    None     62.1
11       Supraglottic Larynx  T2N2bM0   Squamous        Concurrent    None     64.0
12       BOT                  rT3N3M0   Squamous        Concurrent    None     68.5
13       Pyriform Sinus       T4N1M0    Squamous        Concurrent    None     70.2
14       Epiglottis           TXN2bM0   Squamous        Concurrent    None     50.0
15       Tonsil               T2N2M0    Squamous        None          Post-Op  61.2
16       Unknown              TXN1M0    Squamous        Concurrent    Post-Op  64.8
17       Nasopharynx          T3N0M0    Squamous        Concurrent    None     70.2
18       Larynx               rTXN2M0   Squamous        Concurrent    None     50.0
19       BOT                  T4N2M0    Squamous        Concurrent    None     67.6
20       Retromolar Trigone   T2N1M0    Squamous        Concurrent    Post-Op  52.2
21       Oral Tongue          T2N0M0    Squamous        None          Post-Op  63.0
22       Supraglottic Larynx  T2N0M0    Squamous        None          None     70.0
23       BOT                  T1N2aM0   Squamous        Concurrent    None     52.0
24       Unknown              TXN1M0    Squamous        Concurrent    None     63.0
25       Nasopharynx          TXN0M0    Adenoid Cystic  None          Post-Op  61.2
26       Retromolar Trigone   T2N0M0    Squamous        None          None     70.0
27       BOT                  T1N2aM0   Squamous        Concurrent    None     70.4
28       Tonsil               T2N1M0    Squamous        Concurrent    Post-Op  63.0
29       Nasopharynx          T2N1M0    Squamous        Concurrent    Post-Op  70.4
30       Pharynx              T2N0M0    Squamous        Concurrent    None     59.4
*BOT = base of tongue


1.2 ORIGINAL CONTRIBUTIONS

Many studies published in the literature in the last ten years have identified the problems

associated with anatomical changes in H&N patients by analyzing tumor volume regression,

parotid shift data, parotid volume loss, and dose reconstruction data; but none have determined

which specific patients should have their plans adapted, and when. In this study, a new type of data will be analyzed with an approach that has not been used before in order to determine

which patients would "benefit from ART." An outline of the original contributions of this

dissertation is presented below.

1. Development of a method for identifying which head & neck patients would most

benefit from adaptive radiation therapy, with a focus on those patients who will

require a new facemask. The method employs four different cross-sectional measurements, taken from the neck region, and evaluates the change in each measurement (a minimal illustrative sketch of this kind of threshold check follows this list).

2. Development of a reliability metric for the H&N patients identified as needing

adaptive radiation therapy.

3. Development of a method for predicting when H&N patients will need ART.

4. A metric for reporting reliability when making a prediction, which will aid the

oncologist in making a decision. The reliability metric takes into account how well

measurements are correlated along with a “closeness to crossover” parameter, which

is a measure of how close (in time) a patient's measurements are to crossing a

particular threshold.


5. An original automated Image-Guided Radiation Therapy tool to analyze the collected

head & neck measurement data. The two-part IGRT tool will aid oncologists and

physicists in the clinical setting by alerting them if and when patients will require

adaptive radiation therapy during the course of their treatment. With the ability to

make an accurate prediction, time and resources can be pre-allocated to take into

account the extra workload from having to create new treatment plans and/or a new

facemask.
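
To make contribution 1 concrete, the following is a minimal, hypothetical Python sketch of this kind of threshold check; it is not the dissertation's actual implementation. Only the use of four cross-sectional neck measurements and the 0.5 cm change threshold come from this work; the function name, data layout, and the choice to measure change against the first observation are illustrative assumptions.

# Minimal, hypothetical sketch of the diagnostic threshold check described in
# contribution 1. Assumptions (not from the dissertation): measurements are
# stored as a list of daily observations, each holding four cross-sectional
# neck measurements in cm, and "change" is taken relative to the first observation.

THRESHOLD_CM = 0.5      # change considered significant (from the abstract)
NUM_MEASUREMENTS = 4    # cross-sectional neck measurements per observation

def needs_art(daily_measurements):
    """Return True if any of the four cross-sectional measurements has changed
    by more than the 0.5 cm threshold at any point in the observations so far."""
    baseline = daily_measurements[0]
    for observation in daily_measurements[1:]:
        for i in range(NUM_MEASUREMENTS):
            if abs(observation[i] - baseline[i]) > THRESHOLD_CM:
                return True
    return False

# Fabricated example: four measurements (cm) per treatment day for one patient.
patient = [
    [12.1, 11.8, 13.0, 12.5],   # day 1 (baseline)
    [12.0, 11.7, 12.9, 12.4],   # day 5
    [11.7, 11.4, 12.6, 12.0],   # day 10 -- measurement 4 has dropped 0.5 cm
    [11.5, 11.2, 12.3, 11.8],   # day 15 -- several changes now exceed 0.5 cm
]
print(needs_art(patient))       # True for this fabricated patient

In the actual IGRT tool, the diagnosis is made predictively from the first 15-20 observations rather than from a complete record, and each call is accompanied by the reliability value described in contribution 4.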

1.3 DOCUMENT ORGANIZATION

A detailed literature survey of current radiotherapy practices, the advent of image-guided

radiation therapy, and the use of new treatment techniques to implement adaptive radiation

therapy is presented in section 2. Next, the methodology for developing the IGRT tool will be

discussed in section 3. Then, in section 4, the results of using this tool as an integrated decision-making process are presented. Finally, section 5 concludes the dissertation and provides

recommendations for future work.


2 LITERATURE SURVEY

This section provides a literature survey of the need for, and benefits of, treating head and neck cancer patients with adaptive radiation therapy. The survey gives an overview of current radiation therapy practice, recent advances in the field of radiation oncology, methods used for managing the uncertainty associated with treating head and neck patients with radiation therapy, and how IGRT can be incorporated into the ART process.

2.1 INTRODUCTION

The current clinical process for developing a radiation treatment “plan” involves first

acquiring an image set of the patient. The majority of patients simply undergo a pre-planning

computerized tomography (CT) scan, while some patients require additional diagnostic scans,

such as a PET scan or an MRI series to better identify the tumor site. The images acquired are

crucial in determining where radiation is to be delivered.

Once diagnostic and planning images are obtained, a computerized treatment planning

system (TPS) is utilized for contouring and calculating doses for any given volume within the

patient. The International Commission on Radiation Units and Measurements (ICRU) defines the gross tumor

volume (GTV) as “gross palpable or visible/demonstrable extent and location of malignant

growth.” The clinical target volume (CTV) includes gross tumor volume “and/or sub-clinical

microscopic malignant disease which have to be eliminated.” It is the CTV that we hope to

eradicate by the application of radiotherapy [ICRU Report #50 1978].


Another important volume used in radiation therapy, also introduced by the ICRU in

1978, is the planning target volume (PTV), which is typically the clinical treatment site or target for which a prescription of radiation dose is to be delivered. It comprises the CTV plus an

appropriate margin, the extent of which is determined by the physician. The PTV was defined

due to the recognition that there are treatment uncertainties, such as organ motion and/or patient

positioning errors. At the end of the planning process, the product is a treatment plan with

specifications for total radiation dose, daily treatment doses (called “fractions”), field sizes, and

field shapes. Figure 3 illustrates how the target volumes are defined by the ICRU.

Figure 3: Target Volume Definitions


2.2 ADVANCES IN RADIATION THERAPY

Oncologists and physicists have always been concerned with delivering the best radiation

treatment possible to cancer patients without damaging normal, healthy tissue. An article by

Herring and Compton [1970] explains the importance of disease control and normal tissue

complication rates with respect to dose. Based on their evaluation, they recommend that the dose

delivered over the course of treatment should be within ±5% of the prescription in order to control local disease and spare normal tissue function (for example, for a 70 Gy prescription, this corresponds to keeping the delivered dose between roughly 66.5 and 73.5 Gy). To stay within the 5%, accuracy and precision would be

required at each step of the treatment process, and therefore some kind of system would need to

be implemented. This system would take into consideration both dosimetric and geometric

factors, which can adversely affect the quality and accuracy of radiation therapy treatments.

According to Jaffray, Yan, and Wong [1999], the required geometric precision of radiotherapy

treatments must be selected based on treatment site and cost. They explain that on-line

techniques (where the patient is on the treatment table during measurement) are best suited for

identifying intra-fraction setup errors and organ motion during treatment, while off-line

strategies (done outside of patient treatments) best identify inter-fraction (daily) errors.

The ICRU began publishing reports with recommendations and standards for radiation in the 1950s, and the information in ICRU report #50 [1978] has set a standard for practice in radiation oncology. As mentioned before, the report defines such things as the CTV: the region which contains the gross tumor volume (GTV) plus a surrounding margin for microscopic

disease. The planning target volume (PTV) includes the CTV as well as margins to account for

patient setup and organ motion.


With a set of suggested limits and standards in place, oncologists and physicists

continued to use the available tools for creating and delivering acceptable treatment plans. The

next sections will describe the advancement of those tools, specifically the use of imaging for

treatment verification, and how they have enabled the development of adaptive therapy.

2.2.1 Portal Imaging

Early studies regarding geometric uncertainty in EBRT used portal films to evaluate

setup errors during treatments. A study done by Marks et al. [1974] evaluated 99 lymphoma

patients and detected 330 localization errors (out of 902 films) by using treatment verification

films, or portal films. Their study concluded that “treatment verification films are invaluable in

the detection of localization errors in the treatment of lymphoma, and that they can lead to a

reduction in localization errors if they are used regularly.” Similar studies were done during the

1970s and, due to the realization that verification films were very useful in detecting errors, the

development of electronic portal imaging devices, or EPIDs, began. A pilot study done by Boyer

et al. [1992] looked at the limitations of imaging with high-energy radiation beams as well as the

different techniques used to produce EPIDs. An extensive review of optical systems using direct

x-ray-induced fluorescence was done, and they concluded that EPIDs provided images

comparable to radiographic film (which was the standard until the early 1990s) and were

convenient due to the ability to store and transfer them electronically.

Because the goal of radiotherapy is to treat cancerous tumor cells within the CTV while

sparing normal surrounding tissues, it is necessary that patient setup errors be quantified and

reduced. The early use of EPIDs allowed for the identification and measurement of these errors.


Work done by Erridge et al. [2003] examined patient set-up, tumor movement, and tumor

shrinkage for lung cancer patients. They produced results which validated the use of EPIDs to

reduce set-up error and to localize the tumor more accurately than with radiographic film alone.

Hurkmans et al. [2001] reviewed clinical practices performed at The Netherlands Cancer

Institute and developed a set of recommended allowable set-up deviations for different cancer

sites, including H&N, prostate, pelvis, lung, and breast. These recommendations were a result of

using EPID images to come up with criteria for “good clinical practice” in patient positioning.

2.2.2 Image-Guided Radiation Therapy

The use of 3D imaging in radiation oncology has become routine in clinical practice. In

the treatment planning process, images of the patient are used by the oncologists to identify the

tumor location as well as where the surrounding normal tissues are located. The frequent use of

imaging in the treatment room during a course of radiation therapy, where decisions are made

based on these images, is called image-guided radiation therapy, or IGRT. Jaffray explains that

this type of volumetric imaging permits “unprecedented characterization of the diseased and

normal structures with precise and accurate determination of geometric location” [2005].

Dawson and Sharpe [2006] reviewed IGRT in terms of rationale, benefits, and limitations. They determined that the rationale for IGRT was to verify and guide treatments, which is important for identifying setup errors and daily tumor changes. The benefits of IGRT include the ability to measure tumor changes and to reduce setup error once it has been identified. The major benefit, however, is the ability to verify increased doses to tumors. Without the prior ability to verify

dose distributions, tumors were treated at lower doses so as to not over-dose normal tissue. But


with the advent of IGRT, higher, monitored doses can be delivered to the patient and local

disease control and survival rates have increased. The only evident limitation is that no one

technology or strategy exists that is appropriate for all clinical situations. IGRT does, however, provide the opportunity to adapt radiotherapy to anatomical changes in a way that was not previously possible.

Sorcini and Tilikidis [2006] reported on the clinical application of IGRT using a Varian

linear accelerator with an on-board imager (OBI). Their study, conducted in Stockholm, Sweden,

evaluated using the OBI for on-line set-up correction of prostate patients using internal gold

markers. They measured the displacements of the markers and were able to accurately track the

position of patients’ internal anatomy.

The usefulness of IGRT has been extensively researched and is present in most clinical

treatment facilities in the world. IGRT allows for daily identification of the size, shape, and

location of the tumor so that positional adjustments can be made if needed. This advantage

permits better tumor control and reduces the risk of toxicity after radiotherapy is complete.

Computed Tomography (CT)

Originally, a single set of CT scans was acquired prior to the start of a patient’s treatment

and used as a basis for radiotherapy planning. Soon after it was realized that the position of the

tumor could change from fraction to fraction, routine CT-imaging prior to each daily treatment

was adopted. The study done by Barker et al. [2004] evaluated the use of a CT/linear accelerator

combination as a means for measuring anatomic changes during H&N radiotherapy. The results


from 14 H&N patients showed that significant volumetric and positional changes occurred for

the gross disease. By acquiring daily pre-treatment CT scans, it is possible to monitor tumor

position and help reduce uncertainty in the treatment.

Newer types of CT scans are currently available, such as megavoltage CT (MVCT) and

megavoltage cone beam CT (MVCBCT), which allow for better contrast in the images. The use of integrated CT/linear accelerators is also becoming more widely accepted. Helical

TomoTherapy machines include an on-board MVCT scanner to acquire images just prior to

treatment. Elekta’s Synergy is available with an on-board MVCBCT scanner. Megavoltage cone-

beam CT is a technique used to acquire images by using a cone-shaped x-ray beam rather than a

conventional linear fan beam, projecting it through the patient, and using a detector on the other

side to record the attenuation of the x-rays through the patient. In contrast, MVCT images, such

as those acquired on a TomoTherapy machine, are made by detecting a fan beam of x-rays

projected through the patient.

Pouliot et al. [2005] conducted a feasibility study of MVCBCT with respect to

acquisition time and quality of images. Because of the newness of the technology, both frozen

sheep and pig heads were scanned and images were reconstructed to evaluate the quality of bony

anatomy identification. It was concluded that MVCBCT produced excellent images; however,

surface dose to the skin (2 cGy/film) could prevent acquiring the images every day. Oldham et

al. [2005] evaluated an on-line application for MVCBCT and found that the images produced by

this technique provide for excellent delineation between soft tissue and bone, allowing for

correct setup verification.


Dawson and Jaffray recently conducted a review of the advances in IGRT and how it can

be extended even further beyond its current uses. With the realization of the clinical benefit that

IGRT has for verifying correct patient positioning and monitoring day-to-day anatomical

changes, studies are now moving forward to not just identify problems, but correct them as well.

Looking ahead to future applications of IGRT, "Adaptive radiation therapy can now be exploited,

and the impact on replanning or changing the treatment course on the basis of information

obtained during treatment can be investigated” [2007].

2.3 ADAPTIVE RADIATION THERAPY

As mentioned previously, members of the William Beaumont Hospital introduced the

term adaptive radiation therapy as “a radiation treatment process where the treatment plan can be

modified using a systematic feedback of measurements” [1997]. The idea is simply aimed at

improving radiation treatments by monitoring daily changes and integrating the information into

a new, re-optimized plan, which is customized for each patient.

The first studies done using ART involved taking portal films and making set-up

adjustments at that time [Bel et al. 1995; Michalski et al. 1996]. This approach is termed on-line

ART because the adaptation is done while the patient is lying on the treatment table, and it is used to make decisions about shifting the patient's position. A newer approach, off-

line ART, is becoming more popular and it is believed to be better in terms of correcting for

systematic and random errors [Hong et al. 2005]. Off-line adaptive therapy involves using data

which has previously been collected and then evaluating it later in order to make a decision.


Currently, there is sufficient literature to support the idea of adapting plans during the

course of treatment for cancer patients. Brabbins et al. [2005] conducted a dose-escalation trial

on prostate patients and determined that using adaptive therapy to track the internal location of

the tumor, along with radiotherapy, allowed for reduced toxicities after the completion of

treatment. Yan et al. [1997b] used adaptive modification of treatment planning to minimize the

effects of set-up error, and work done by Sharpe et al. showed that adaptive planning and

delivery is extremely effective in accounting for anatomical changes induced by radiation

therapy [2005]. One specific new application involves tracking lung motion and tumors within the lungs. Using adaptive therapy along with image-guided TomoTherapy treatments has been studied for the last few years [Ramsey et al. 2004, 2006], and new techniques have been developed which have led to better treatment of lung cancer patients.

Only a few recent studies have specifically addressed the issue of ART for patients with H&N cancer. There have been studies which identify the dosimetric effects due

to not replanning. For example, Jenkins et al. used daily MVCT imaging to determine dose

variations for H&N patients. They found that it is necessary to perform daily imaging so as to not

incorrectly treat patients [2005]. Doemer et al. came to a similar conclusion after analyzing 15

H&N patients and determining that adapting patient plans was frequently necessary due to

increasing doses to the spinal cord [2008]. Other studies, such as the one done by Wu et al.

[2007], looked at adaptive replanning strategies accounting for tumor and normal tissue shrinkage

for patients undergoing treatment. These studies emphasize how essential it is to evaluate both

tumor volume response and dose distributions when treating H&N areas.


In the literature, there have been no new approaches for identifying which patients would

benefit from ART and determining exactly when those plans should be adapted during the course of radiotherapy treatment. The first step in ART is identifying the possible sources of uncertainty and error in a patient's radiotherapy treatment.

2.3.1 Identifying Uncertainty and Error

A review was conducted of the advances in identifying uncertainty [Jaffray et al. 1999].

From this review of the current literature, it was found that uncertainty is typically characterized as either setup error or organ motion. Setup errors are a result of variations in the positioning of the patient's bony anatomy, while organ motion is variation in the position of the target (tumor) within the patient. It was concluded that the magnitude of organ motion errors was significantly less than that of setup errors and that accommodations should be made in planning procedures.

As mentioned before, the study done by Hurkmans et al. [2001] did an extensive

evaluation of set-up verification using portal imaging in their clinic. The study separated setup

errors into two main classes: random and systematic. Random errors are inter-fraction errors

which are deviations between fractions, and systematic errors are deviations between the planned

patient position and the average patient position over a course of fractionated therapy.
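
As a minimal illustration (not drawn from Hurkmans et al. [2001] or from this dissertation), the sketch below computes the two components for a single patient under the common convention that the systematic error is the mean daily deviation from the planned position and the random error is the fraction-to-fraction spread about that mean; the data and function are hypothetical.

# Hypothetical illustration of the setup-error definitions above. The list holds
# one lateral deviation (in mm) per fraction, measured between the planned and
# the actual patient position.
import statistics

def setup_errors(daily_offsets_mm):
    """Split a single patient's daily setup deviations into a systematic
    component (mean offset from the planned position) and a random component
    (fraction-to-fraction spread about that mean)."""
    systematic = statistics.mean(daily_offsets_mm)
    random_error = statistics.stdev(daily_offsets_mm)
    return systematic, random_error

# Example: lateral offsets (mm) for ten fractions of a hypothetical patient.
lateral_offsets = [2.1, 1.8, 2.5, 3.0, 1.6, 2.2, 2.9, 1.9, 2.4, 2.6]
sys_err, rand_err = setup_errors(lateral_offsets)
print(f"systematic = {sys_err:.1f} mm, random = {rand_err:.1f} mm")

In multi-patient studies these quantities are usually summarized over the whole population, but the per-patient decomposition above is enough to capture the distinction drawn here.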

Being able to identify potential errors allows for correction before a patient is treated

incorrectly. As such, daily setup variation should be incorporated into plans in order to achieve

the best treatment. Studies have been done on predicting systematic and random errors [Zeidan et

al. 2007; Allen et al. 2007] and how they can be used in conjunction with the ART process. But,


while evaluating setup error is extremely important, other sources of error should not be

overlooked when using an adaptive approach to radiation therapy.

2.3.2 Anatomical Changes

During a radiotherapy course, many H&N patients experience substantial anatomical

changes due to tumor response, nodal mass response, edema, or overall weight loss. Wang [2007]

looked at how body weight loss is associated with error in treating nasopharyngeal patients.

Barker [2003] measured tumor volumes at the beginning and end of treatments, finding that the

response to radiation can cause drastic changes in volume over the course of therapy. Meldolesi

[2006] examined skin volume reduction and parotid size change in nine H&N patients. Lee

[2007] measured how the parotid glands physically moved with respect to the tumor. All four

studies measured and identified particular anatomical changes that can occur over the course of

radiotherapy treatments, which should all be taken into consideration by the radiation therapy

staff.

These changes in anatomy can prove to be a problem if they affect the quality of a

patient’s radiation treatment. A few studies have identified H&N patients as potential ART

candidates due to their response to radiation treatment [Feng & Eisbruch 2007; Meyer et al.

2007], while several studies have evaluated tumor response by looking at images taken from

various IGRT devices. Because immobilization devices, such as masks and bite-blocks placed in

the mouth, are typically used to hold the patient in place during treatment, weight loss or a reduced tumor mass can reduce the effectiveness of the device. As such, it might be

necessary to adapt the patient’s treatment plan to account for these specific anatomical changes.


Tumor Response to Radiation

Barker et al. [2004], as previously mentioned, conducted a retrospective study to

determine the dosimetric effects on normal tissue and tumor volumes using daily CT scans and

replanning during the course of IMRT. The study reported that the gross tumor volume

decreased throughout the course of radiotherapy at a median rate of 1.7% - 1.8% per day, and the

volume loss was frequently asymmetric. Figure 4 shows an example of how rapidly anatomy can

change (especially the tumor) during radiation therapy. The left panel depicts the CT image prior

to treatment, and the right panel shows the same image after the patient had undergone 3 weeks

of treatment. Changes such as this would most likely require the patient to have their plan

adapted during the course of a typical six- to seven-week treatment period.

Figure 4: Rapid anatomy changes for one H&N patient (a) before and (b) three weeks into

radiation therapy [Barker et al. 2004]
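
As a rough back-of-the-envelope check on what such a rate implies (an illustration, not a result reported by Barker et al.; the 35-fraction course length is an assumption, and the text above does not say whether the daily percentage is relative to the initial or to the remaining volume, so both readings are shown):

# Back-of-the-envelope calculation (not from Barker et al. [2004]): how much
# gross tumor volume might be lost over a 35-fraction course at ~1.8% per day,
# under two different readings of the reported daily rate.
daily_rate = 0.018
fractions = 35

# Reading 1: 1.8% of the *initial* volume lost each day (linear loss).
linear_loss = min(daily_rate * fractions, 1.0)            # ~63% of initial volume

# Reading 2: 1.8% of the *remaining* volume lost each day (compounding loss).
compounding_loss = 1.0 - (1.0 - daily_rate) ** fractions  # ~47% of initial volume

print(f"linear reading:      {linear_loss:.0%} volume loss")
print(f"compounding reading: {compounding_loss:.0%} volume loss")

Either reading, roughly half to two thirds of the gross tumor volume over a typical course, is consistent with the kind of change shown in Figure 4.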


Weight Loss

H&N patients can also experience significant weight loss during their course of radiotherapy. The resulting change in anatomy, whether from tumor shrinkage or from overall weight loss, could potentially result in an under-dosage of the tumor/target tissue or an over-dosage of normal tissues. The same study done by Barker with regard to tumor response also

involved evaluating dosimetric consequences due to weight loss. The study reported a median

weight change from start to finish of treatment as -7.1%. Therefore, it is evident that weight loss

plays a significant role in determining if a patient will require ART. A review of IGRT in terms

of its rationale, benefits, and limitations was previously mentioned [Dawson & Sharpe 2006].

Figure 5 shows two cone-beam CT images taken of the same patient at the beginning of

treatment (A) and then after 3 weeks (B) of treatment, and anatomical change (weight loss) can

clearly be seen when using this imaging modality.

Figure 5: Cone-beam CT image showing anatomical response to radiation therapy [Dawson &

Sharpe 2006]


2.4 IGRT FOR ART IMPLEMENTATION

The use of daily imaging has been proven to aid oncologists and therapists in identifying

when changes have occurred during the course of therapy. With these daily images, it is possible

to conduct retrospective analysis for any given patient. At the Thompson Cancer Survival Center

in Knoxville, TN, a Helical TomoTherapy machine is used daily to treat the majority of H&N

patients who are undergoing external beam radiation therapy. This machine includes an on-board

CT scanner which is used to acquire daily pre-treatment images for the purpose of verifying the

correct set-up of the patient. When a patient is finished with their treatment, their information is

archived in a database.

The most significant ART studies using IGRT were done by Barker [2004] and

Hansen [2006]. Both authors extensively evaluated volumetric, geometric, and dosimetric effects

on H&N patients without replanning their treatments. In 2004 Barker used “high-definition

imaging studies throughout the treatment course to quantify the magnitudes and rates of three-

dimensional anatomic changes occurring gradually over time." It was concluded that GTVs

typically regress asymmetrically and that the results might have potential dosimetric implications

when conformal treatments are used. In 2006, Hansen and his co-authors followed up that study

by performing repeat CT scanning to evaluate doses to target volumes and normal tissues. They

reported that they were not able to separate the effects of tumor shrinkage and weight loss due to the small sample size (n = 5), and that "future studies should investigate the effects of

asymmetric location of normal structures and/or target volumes, asymmetric volume losses, and

the use of multiple asymmetric beam angles on the dosimetry of target volumes and normal


tissues.” The results from these two studies suggest the great need for a tool to aid clinicians in

determining which patients should have their plans adapted.

It is known that imaging in the treatment room has provided extensive data which

documents movement and anatomic change of targets during and between treatment fractions.

Many investigators demonstrated this in their studies by using 3D images to obtain tumor

response measurement data [Kupelian et al. 2005; Sohaib et al. 2000; Siker et al. 2006]. This

data can be combined with the various measured anatomical changes in order to manage target

movement and setup uncertainty. For example, a few studies have reported using daily CT

images to aid in adaptive procedures [Liang et al. 2008; Lu et al. 2006]. The only question that

has yet to be answered is how exactly to incorporate the data (obtained from IGRT devices) into a decision-making process.

2.5 CLOSING REMARKS

Most studies that have involved measuring initial tumor volume propose a “percent loss

threshold” approach for future studies in order to make an ART decision. For example, one

recent study suggested that if the tumor volume is reduced by 30%, then a new plan should be

created [Woodford et al. 2007]. This is a valid approach, but patients can lose body weight

without losing tumor volume.

For instance, if a patient were prescribed 1.8 Gy per fraction for 35 fractions but then lost a significant amount of weight in the first 15 fractions, it would be of interest to the oncologist to know how this would affect the outcome of the remaining 20 fractions of the radiation treatment

plan. If one were to use the proposed 30% tumor reduction threshold but the patient lost enough


weight in the first 15 fractions to cause the parotid glands to move into the PTV and the tumor

never reached that threshold, one could potentially damage the patient’s salivary function.

As stated before, for an overall successful outcome of treatment, both accuracy and reproducibility must exist in radiotherapy. When a patient loses weight or experiences anatomical change in response to radiation, these two principles can become compromised.

Accuracy is lost if a tumor is no longer receiving the prescribed dose (thereby under-dosing) or if

normal structures are receiving too much dose. Reproducibility is missing if the immobilization

device is no longer effective.

Therefore, if a patient loses weight in the neck and an immobilization device is no longer

effective, then they should have their treatment course adapted, and more than likely will require

construction of a new facemask. By using a percent threshold, a patient may be overlooked when

adapting their treatment plan would be the more appropriate course of action. By using cross-

sectional measurement data to evaluate changes in daily measurements, weight loss patients and

tumor response patients can be distinguished and a more informed decision can be made about

which patients (and when) would "benefit from ART." More specifically, this dissertation

focuses on developing a modified clinical decision support system to aid clinical staff in decision-making processes.


3 METHODOLOGY

This section will describe the basic methodology for developing the IGRT tool, which is

the major contribution of this dissertation. The main purpose of this tool is to help in answering

two questions. The first is, “How do we know which patients will benefit from adaptive radiation

therapy?” The second is, "If a patient will require adaptive therapy, at what point in their

treatment should an adaptive approach be implemented?" In other words, "How do we know

which patients are incurring large anatomical changes, be it weight loss or tumor response?" and

"At what point during the course of treatment should we be concerned about delivering an

incorrect radiation dose due to an ineffective facemask and/or treatment plan?"

The solution to these questions is twofold: solving a diagnostic problem to identify “who,” and then dealing with a prognostic-type problem to predict

“when.” Answering these questions ultimately identifies those head and neck patients who would

benefit from adaptive radiation therapy, and finding the answers is the main focus of this

dissertation. Once the answers are known, the ultimate result is either:

1. The patient is not experiencing enough anatomical response to radiation treatment to

require adaptive therapy.

2. The patient is experiencing significant anatomical response to radiation and will

require adaptive therapy. If so, then:

a. The patient needs the creation of a new treatment plan but no new facemask.

b. The patient needs the construction of new facemask but no new treatment

plan.


c. The patient needs both a new facemask and a new treatment plan.

To help in answering these questions, cross-sectional measurement data taken from thirty

patients’ neck regions were evaluated based on the amount of change in the first few treatment

fractions. By identifying those patients whose measurements exhibit early "significant change," a

decision can be made as to whether or not a new mask should be made and/or if a new treatment

plan should be developed. If the answer to the first question is "yes", then the second question

will be answered by predicting at what point in time adaptive therapy is required. If the answer is

“no,” then it is assumed that the patient requires no re-evaluation.

In this dissertation “significant change," according to the physician at the institution

where the data was collected, is a change greater than 0.5 cm in any of the four cross-sectional

measurements. The patients who exhibit this magnitude of change would potentially require

ART, and these are the patients who are the most important to identify and correct for at the

appropriate time. A description of how the data was acquired and then how it was used to

develop a diagnostic method for answering the first question will be presented, along with the

techniques used to make a time prediction, thereby answering the second question.

3.1 DESCRIPTION OF THE DATA

It is known that human tissue responds to radiation: large tumors tend to significantly

decrease in size over the course of treatment, and patients tend to lose weight due to side effects

of radiation and/or chemotherapy (nausea, dysphagia, etc.). Because H&N patients often


experience gradual changes in their anatomy during radiotherapy, daily CT scans are very useful

in identifying those changes. For the TomoTherapy machine, once a patient is scanned, the

original planning CT image is fused with the new scan. By looking at the superimposed

images, significant changes in anatomy can easily be identified and measured.

3.1.1 Data Collection

Figure 6 shows an example of a patient who experienced "significant" anatomical

changes, and for this case, it was attributed to overall weight loss. The gray image is the original

planning image, which was acquired before the patient began treatment. The blue image was

taken in the 5th week of treatment. It is apparent that the patient had an overall reduction in the

neck region and perhaps more on the right side near the C2-C3 level.

Previously discussed studies have evaluated data from volumetric changes in tumors and

normal tissues. The results from the Hansen study show that no distinction could be made

between the significance of tumor response and weight loss. However, by taking cross-sectional

measurements of the head and neck region and looking at the rate of change in the first few

treatments, it appears as though this distinction could be made. For example, if measurements

taken on the right side of the face change at a faster rate than those on the left side, then one

could say that patient was experiencing tumor response to radiation. However, if all

measurements around the head and neck region are changing rapidly, one might say that patient

is simply losing weight.
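For illustration only, a minimal MATLAB sketch of this kind of left/right comparison is given below; the simulated measurements, variable names, and the slope-difference cut-off are illustrative assumptions and are not part of the tool developed in this dissertation.

% Illustrative comparison of right-side vs. left-side rates of change.
% rightMeas and leftMeas are hypothetical [nFrac x 2] matrices of lateral
% measurements (cm) for the right and left side of the neck.
nFrac     = 30;
fractions = (1:nFrac)';
rightMeas = repmat(7.5 - 0.030*fractions, 1, 2) + 0.02*randn(nFrac, 2);  % shrinking
leftMeas  = repmat(7.4 - 0.005*fractions, 1, 2) + 0.02*randn(nFrac, 2);  % nearly flat

% First-order fits give the average rate of change (cm per fraction) per side.
pRight = polyfit(fractions, mean(rightMeas, 2), 1);
pLeft  = polyfit(fractions, mean(leftMeas, 2), 1);

% Crude interpretation (the 0.01 cm/fraction cut-off is an assumption):
if pRight(1) < 0 && pLeft(1) < 0 && abs(pRight(1) - pLeft(1)) < 0.01
    disp('Symmetric decrease: consistent with overall weight loss');
elseif pRight(1) < 0 || pLeft(1) < 0
    disp('Asymmetric decrease: consistent with localized (tumor) response');
else
    disp('No appreciable decrease');
end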


Figure 6: Patient 7 undergoing anatomical changes

Data Collection Tool

A total of 1450 MVCT images were obtained from the TomoTherapy archive database

and evaluated for tumor response, weight loss, and anatomical changes. Cross-sectional head and

neck data was collected for 30 H&N patients for a total of over 30,000 measurements. These

measurements were taken using each patient’s daily pre-treatment MVCT scans and included

anterior-posterior (AP) and lateral (LAT) measurements. The AP measurement is beneficial

when evaluating incorrect positioning; however it was used here only for the purpose of locating

the midpoint of the patient’s head. Therefore, two lateral measurements were taken with respect

to the center of each vertebral body, and two other lateral measurements were taken with respect

to the midline of the patient’s head, which totals 4 lateral measurements for each available

vertebra (maximum of 5). Figure 7 shows how these measurements were taken with respect to


the center of the vertebral bodies. Starting with the red solid line in the figure and moving

clockwise around the head, we have measurements one through four, respectively.

A graphical user interface (GUI) was developed by Dr. Alexander Usynin at the

Thompson Cancer Survival Center in Knoxville, TN for the purpose of taking these cross-

sectional measurements in a consistent and efficient manner. The tool was created in MATLAB

and was used to recall archived imaging data, (from the TomoTherapy database) which allowed

for the collection of the cross-sectional measurement data. Figure 8 shows an example of this

tool, and it was used to collect all data from the 30 H&N patients. Once all daily MVCT images

are loaded, the user has the ability to scroll through the kVCT (original) scan until the desired

slice is found. Then, the user “Accepts Slice,” the corresponding MVCT image appears in the window, and the user now has the ability to pick any point on the scan to take a measurement.


Figure 7: Cross-sectional measurement definitions (measurements one through four, numbered clockwise starting from the solid red line)

Figure 8: Data collection tool


In the right panel, the blue lines indicate the location of the slices selected for that

particular day (which are not usually chosen consistently from day to day). The green line shows the corresponding slice, which is seen in the left panel. In order to maintain the

consistency of the data for each patient, the center of each cervical vertebral body was chosen as

a bony landmark. Most of the studies done with H&N patients, especially those evaluating setup

error and uncertainty, use the cervical vertebrae as landmarks; this was the reasoning

behind using them in this study. Measurements were taken at the center of each vertebral body,

extending from C1-C5. There are 7 cervical vertebrae; however, the shoulders appear around C5

or C6 for most patients, and it was found that the information taken at that level is not very

useful for modeling.

Because the images were acquired at a 3-mm slice thickness, one is able to see parts of

each vertebral body throughout the slices. However, to maintain consistency, the center of each

vertebra was located and used as the reference point for taking a measurement. In other words,

measurements were taken where each vertebra appeared intact. Once the point of reference is

established, the tool automatically takes the four lateral measurements and then the user can save

the values to a spreadsheet. The process is repeated for all available vertebrae and then for each daily MVCT scan. As a note, data for some of the vertebrae were missing due to some patients not

being scanned with the same consistent set of slices each day. This was corrected for by linearly

interpolating between points.
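A minimal MATLAB sketch of that correction is shown below, assuming the daily measurements for one vertebra are stored in a vector with NaN marking the fractions where no suitable slice was available; the values and variable names are illustrative.

% Fill missing daily measurements (NaN) for one vertebra by linear interpolation.
meas = [7.4 7.3 NaN 7.2 NaN NaN 7.0 6.9];   % cm, one value per treatment fraction
frac = 1:numel(meas);                        % treatment (fraction) numbers

known  = ~isnan(meas);                       % fractions with a recorded measurement
filled = meas;
filled(~known) = interp1(frac(known), meas(known), frac(~known), 'linear');

disp(filled)   % gaps replaced by linearly interpolated values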


3.1.2 Reduction of Dimensionality

The dimensionality of all data sets is extremely high, so all data was broken up for each

vertebra. For example, patient 1 was treated for 40 treatment fractions and 4 lateral

measurements were taken for all 5 vertebrae. If the data is separated by vertebra, then the results are five 40×4 matrices (4 measurements at each observation). If separated by measurement, then the result is four 40×5 matrices. The next consideration was the number of models that should be

created. It was assumed that, instead of combining all measurements from all vertebrae together,

a more meaningful approach would be to create 4 models, one for each cross-sectional

measurement. The rationale for this approach will be discussed later in the results section.

Even after partitioning the data to build 4 different models, a large amount of data still

exists, so a trimmed mean was performed for all measurements with respect to each vertebra in

order to reduce the dimensionality of the data. The built-in MATLAB function “trimmean” was

used for this process, where for a matrix input X, the output m is a row vector containing the

trimmed mean of each column of X. The function works by calculating the mean of X while excluding the highest and lowest (percent/2) percent of the values, where percent is a scalar between 0 and 100.

The trimmed mean is a robust estimate of the location of a sample. If there are outliers in

the data, the trimmed mean is a more representative estimate of the center of the body of the data

than just the mean. The highest and lowest 25% of the data was excluded, meaning that only half

of the data was used to calculate a new (condensed) representation of each measurement. This

value was chosen because visual inspection of the data showed that usually there were only two

(at most) outlying measurements whose trends do not follow the other data.


To clarify this operation, consider the example presented in Figure 9, which shows an

example of how the trimmed mean was performed to reduce the dimensionality of the data. First,

a 2nd order polynomial line was fitted for each measurement taken (totaling 16 lines per patient)

because most of the data visually followed this type of trend. Next, the regression coefficients for

each line were calculated and then a trimmed mean was performed in order to get a new set of

regression coefficients. Finally, these new coefficients are used to calculate a line which

represents the underlying trend for the previously existing measurements.
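A minimal MATLAB sketch of this reduction is given below for a single cross-sectional measurement taken at five vertebrae; the simulated data are illustrative, while the 2nd-order fit and the 50% trimming level (trimmean excludes the highest and lowest 25%) follow the description above.

% Reduce one cross-sectional measurement (taken at several vertebrae) to a
% single representative line: fit a 2nd-order polynomial to each vertebra's
% series, trim-average the regression coefficients, and rebuild one curve.
nFrac = 40;
t     = (1:nFrac)';                                   % treatment fraction number
base  = repmat(7 + 0.5*rand(1, 5), nFrac, 1);          % 5 vertebrae, offsets in cm
trend = repmat(-0.02*t + 1e-4*t.^2, 1, 5);             % shared decreasing trend
meas  = base + trend + 0.05*randn(nFrac, 5);           % simulated measurements (cm)

coeffs = zeros(5, 3);                                  % one row of [b2 b1 b0] per vertebra
for v = 1:5
    coeffs(v, :) = polyfit(t, meas(:, v), 2);
end

% trimmean(X, 50) excludes the highest and lowest 25% of each coefficient.
trimmedCoeffs = trimmean(coeffs, 50);

% Representative "trimmed mean" line for this measurement.
trimmedLine = polyval(trimmedCoeffs, t);
plot(t, meas, ':', t, trimmedLine, 'b*');
xlabel('Fraction Number'); ylabel('Cross-Sectional Measurement (cm)');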

Figure 10 illustrates the results from performing a trimmed mean for the measurements

taken at the C1-C4 levels. It can be seen that the dimensionality of the data is reduced while still

preserving a representation of the underlying trend of the rest of the data. The “trimmed mean”

line shown in BLUE STARS in the figure was plotted as a result of the new calculated regression

coefficients. The other lines are simply shown to illustrate how the new line is a valid

representation of the original data.

Visual inspection of the data allowed for the detection of outliers, meaning those lines

whose slopes did not follow the rest of the data. For example, Figure 11 shows one patient's

cross-sectional measurements over a period of 46 treatments. It illustrates how the measurements taken at C1-C3 followed the same decreasing trend (with negative slopes), but the

measurement at C4 showed an increase at the beginning of treatment but then eventually a

decrease.


Figure 9: Numerical example for calculating a trimmed mean (an example set of five 2nd-order regression coefficient vectors; the highest and lowest 25% of the outlying coefficients are removed, and the mean of the remaining coefficients gives the new set of regression coefficients)


Figure 10: Trimmed Mean Performed

Figure 11: Illustration of removing outlying data (cross-sectional measurement (cm) vs. fraction number; trimmed mean line shown with the Vert1–Vert4 measurement lines)


To summarize, a trimmed mean was performed for all measurements, and the

dimensionality of the data was reduced. So now for a patient with R number of treatment

fractions, the dimensionality is reduced from four matrices of dimension [R×5] to four [R×1] vectors. This results in just four measurement lines per patient instead of the original 16. It should also be noted that all measurements taken at vertebra 5 were excluded due to a lack of patient data and because they contributed little to the rest of the data. Also, from

this point forward the term “measurement (number) x” will refer to the resulting line from which

a trimmed mean was performed, where x is one, two, three, or four.

3.2 DESCRIPTION OF THE DIAGNOSTIC PROCESS

The first part of the IGRT tool is intended to help in identifying patients who would

require/benefit from adaptive therapy, meaning either the construction of a new mask or the

creation of a new treatment plan. After the first few measurements are taken, an estimate needs

to be made as to whether or not (within some amount of confidence) a patient might need ART.

Because the physicians (i.e. oncologists) do not always have the ability or the time to evaluate

each patient’s daily MVCT scan for changes, this part of the tool allows for taking measurements

and then reporting back to the doctor if closer attention is required.

In order to correctly notify the oncologist of a potential problem, there needs to be some

sort of criterion that, when reached, would be considered a means for implementing ART,

whether it be to make a new mask or develop a better radiation treatment plan. According to the

physician who treated all of the H&N patients in this study, a change of 0.5 cm in any of the four

measurements would be considered a big enough change to require extra attention. Typically in a


clinical radiation therapy setting, half of a centimeter is considered the threshold for

repositioning patients and thereby reducing setup and treatment error. Therefore, using the same

value was the rationale behind selecting a cross-sectional measurement threshold.

3.2.1 Tools for Diagnosis

The next section will describe the methods used for detecting trends in the cross-sectional

measurement data. The ultimate goal of the diagnostic part of the IGRT tool is to identify those

patients who will meet the physician-specified criteria for re-evaluation of treatment. The answer

to the question for this part of the tool is a simple "yes" or "no."

Trend Detection

Visual inspection of the data shows that most patients’ cross-sectional measurements

either decreased to some extent or had basically no change over the course of treatment. A

decrease is acceptable as long as it does not cause the immobilization device to become

ineffective, but if the decrease is too severe, the patient might need to have a new mask made

and/or a new treatment plan developed.

It was hypothesized that patients who had significant decreasing trends in their cross-

sectional measurements would be candidates for adaptive therapy. The data from each individual

measurement was tested using the Mann-Kendall (MK) test [Dietz & Killeen 1981], which is a

non-parametric multivariate statistical test used for the detection of monotone trends. The test

considers whether the cross-sectional measurement values tend to increase or decrease with time.


As opposed to parametric trend detection methods, the MK test makes no assumption with

regards to the functional form of the trend.

For the MK test formalism, let (X1, X2, …, Xn) be a sequence of measurements observed

sequentially over time. The hypotheses to be tested are:

H0: the observations are randomly ordered;

H1: the observations have a monotonic trend over time.

The test statistic is given by:

K_X = Σ_{i<j} sign(X_j − X_i)    (3.1)

If H0 is true, then the test statistic K_X is asymptotically distributed according to the Normal distribution N(0, σ²), where the variance is given by:

σ² = n(n − 1)(2n + 5) / 18    (3.2)

A confidence level of 95% was used for each test.
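A minimal MATLAB sketch of the test for one measurement series is shown below; it follows Equations 3.1 and 3.2 directly (a plain normal approximation, without the tie corrections used by some implementations), and the series itself is simulated.

% Mann-Kendall trend test for one cross-sectional measurement series.
X = 7.4 - 0.02*(1:30)' + 0.05*randn(30, 1);   % simulated decreasing series (cm)
n = numel(X);

K = 0;                                        % Equation 3.1: sum of sign(Xj - Xi), i < j
for i = 1:n-1
    K = K + sum(sign(X(i+1:n) - X(i)));
end

sigma2 = n*(n - 1)*(2*n + 5)/18;              % Equation 3.2: variance of K under H0
z = K/sqrt(sigma2);                           % approximately standard normal under H0
p = 2*(1 - normcdf(abs(z)));                  % two-sided p-value

hasTrend = p < 0.05;                          % reject H0 at the 95% confidence level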

The Mann-Kendall test was the first technique used with the data. A few patients’

measurements displayed no change over the course of treatment. However, because most patients

experienced some sort of decline in their measurements, the test basically identified all patients

as those with trends. This information is not very useful in terms of identifying those patients

whose measurements are significantly decreasing, so another method, rates of change, was used

to distinguish between slightly and very “trendy” data.


Change in Measurement

Another method for evaluating trends, or behavior of the data, is to simply look at the rate

of change, or the change in each measurement over a given period of time. As mentioned before,

because most of the data follow a decreasing trend, a 2nd order polynomial was fitted, and the

regression coefficients were calculated. Once the regression coefficients are known for the first

few observations, the data can then be extrapolated out to see where (if at all) the threshold of 0.5

cm is crossed for any of the four measurements. If the threshold is predicted to actually be

crossed, then a flag should be put up to say that a patient’s measurements will cross 0.5 cm at

some point in the future.
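A minimal MATLAB sketch of this flagging step for a single trimmed-mean measurement line is given below, using the first 15 observations for the fit (the window size used elsewhere in this work); the simulated data, the planned number of treatments, and the variable names are illustrative assumptions.

% Fit the first 15 observations with a 2nd-order polynomial and extrapolate
% over the full course to see if/when the measurement changes by 0.5 cm.
nObs   = 15;                   % observations used for the fit
nTotal = 35;                   % planned number of treatment fractions (assumed)
t      = (1:nObs)';
meas   = 7.4 - 0.025*t + 0.03*randn(nObs, 1);         % simulated measurements (cm)

p        = polyfit(t, meas, 2);                        % regression coefficients
tFuture  = (1:nTotal)';
predLine = polyval(p, tFuture);                        % extrapolated measurement line

threshold = meas(1) - 0.5;                             % 0.5 cm change from first value
crossIdx  = find(predLine <= threshold, 1, 'first');   % predicted crossover treatment

if isempty(crossIdx)
    fprintf('No 0.5 cm change predicted within %d treatments\n', nTotal);
else
    fprintf('Flag: 0.5 cm change predicted at treatment %d\n', crossIdx);
end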

To illustrate this method we will first examine the true patient data. Figure 13 shows the

four true measurements for Patient 8. Recall that these lines are the result of reducing the

dimensionality of the data, and were calculated by using a trimmed mean technique on the

regression coefficients. The data lines are shown as BLUE stars and the RED dotted lines were

calculated by subtracting 0.5 cm from the patient’s first data point and then plotted straight

across the figure's x-axis. The crossing point, shown as MAGENTA x’s is the point where the

measurement (in BLUE) changes by 0.5 cm (RED).

For example, measurement one in the figure begins with a value of approximately 7.4

cm. Also recall that measurement "one" was taken laterally from the center of the patient’s head

to the right side of the face, which is represented as the solid red line in Figure 12. Therefore, if a

measurement of the cross-section is taken and is less than 6.9 cm, then the physician-specified

criterion has been met, and it is possible that the patient will require adaptive therapy. This, of


course, is at the discretion of the physician, but it is necessary to at least communicate that the

patient is undergoing significant anatomical changes so that an informed decision can be made.

One case where the oncologist might not request a new mask is when a measurement

crosses by 0.5 cm, but the patient only has a few treatments remaining. This can be seen in

Figure 13, where Measurement One for Patient 8 appears to cross the threshold at 27/30

treatments, and Measurement Two crosses at 26/30. The feasibility of constructing a new mask

should also be taken into account; however this issue was not addressed and is also at the

discretion of the doctor.

Other patient demographic data, such as weight loss, primary disease site, disease stage,

and chemotherapy, were initially considered as potentially useful in making a diagnosis. For example, if a patient lost a

certain percent of their overall body weight after the first few treatments, then this could be used

as an indicator along with cross-sectional measurement changes for making a diagnosis.

Appendix B contains a table which reports the weight loss data that was collected for each of the

30 patients. When possible, weights at the beginning and end of treatment were recorded and the

total percent of body weight lost was calculated.

Figure 12: Depiction of measurement “one”


Figure 13: All measurements changed by 0.5 cm (Patient 8; four panels, Measurements One through Four, each plotting cross-sectional measurement (cm) against treatment number with the trimmed mean line, crossover line, and crossover point)


Figure 14: No measurements cross the 0.5 cm threshold (Patient 18; four panels, Measurements One through Four, each plotting cross-sectional measurement (cm) against treatment number with the trimmed mean line and crossover line)


3.2.2 Measure of Reliability

In order to provide meaningful results from the diagnostic part of the tool, some sort of

reliability measure is needed to provide confidence with which a decision is to be made. In other

words, if a patient’s initial measurements suggest that the 0.5 cm threshold will eventually be

crossed, then “how sure” are we that a request for re-evaluation should be made to the oncologist? This is important from an efficiency standpoint, in that there is no point in wasting the doctor’s time unless the reliability is high that a measurement will cross the

threshold.

Traditional reliability analysis involves collecting statistical information that can reveal the underlying mechanism by which a system will fail. Therefore, a correlation analysis

was carried out for the measurements taken at each vertebra to see if this information could be

used as a measure of reliability. As a note, the built-in MATLAB function corr.m was used to

calculate all correlation coefficients.

All of the data was collected retrospectively, which allowed for the analysis of completed data sets. Information was also available concerning which patients had large reductions in their cross-sectional measurements, as well as when their measurements changed by

more than 0.5 cm. This information was used in the modeling process discussed in the next

section, and it was shown that, as more observations (measurements) were made, the correlation

of the data increased, and the percent error with which a prediction was made then decreased.

Therefore, the correlation information proved to be a valid means for reporting preliminary

reliability, and this information will be utilized later for an overall measure of reliability.


3.3 DESCRIPTION OF THE PROGNOSTIC PROCESS

As mentioned before, when a patient loses weight or has significant anatomical response,

normal structures may shift into areas of higher dose or the actual tumor volume may have

changed so much that it is no longer receiving the complete radiation prescription. The questions

to be answered in this work can be thought of as part of a prognostic process, where, given a current state and point in time, the goal is to predict the future state and when it will occur. The area of prognostics

is not the main focus of this dissertation. However, the prognostic framework used to solve

common problems in industry can be applied to the data at hand in order to make a prediction.

For example, a classic prognostic system in industry involves predicting when all or part

of a machine will fail. In this process, statistical data that are collected for the object of interest

are used to evaluate some degradation parameter, which leads to an understanding of the

underlying failure mechanism. Then, the best-suited prognostic model is chosen so that a reliable

prediction for failure can be made. Common terms used in prognostic systems are “remaining

useful life,” “degradation process,” and “time to failure,” which have similar meaning to the

terms used when describing a patient’s anatomical response to radiation. This type of system is

very similar to the one presented in this dissertation. Up to this point, statistical correlation data

has been acquired and the underlying trend of the data is understood. The next and final step in

the process is to make a reliable prediction about when a patient would require adaptive radiation

therapy.

This section will describe how a time prediction can be made, which is the basis for the

second part of the IGRT tool. Once an answer to the first question is found, then the data and its

underlying relationships can then be used to make a prediction about at what point in time the


patient's cross-sectional measurements will (if ever) cross the threshold of 0.5 cm. This is of

particular importance because it allows for pre-allocation of time and resources of the clinical

staff to prepare for an adaptive course of treatment. As a note, the “point in time” is expressed in terms of the treatment number at which the threshold is crossed. In reality, patients are only treated during a typical work week (Monday – Friday), but the predicted “point in time” is based on the assumption that a patient is treated every day of the week. This is a minor detail; however,

it is important to clarify that time is really treatment number instead of elapsed number of days.

3.3.1 Prediction Model Selection

The ultimate deciding factor in model selection should be how well a given model can

characterize the data. Two types of modeling techniques, non-parametric and parametric, are

commonly used to make future predictions when given a set of data. A non-parametric model is

defined by historical data and a process for applying the historical data to estimate new query

observations. An example is weighted regression, where the historical data are memory vectors

selected from normal, fault-free system behavior. This non-parametric technique involves

estimating the similarity of a query observation to memory vectors in order to determine weights

and perform a weighted average of the memory vectors. A parametric model, in contrast, is

defined by a set of parameters and a mathematical framework for applying the parameters. An

example is linear regression, where the parameters are the regression coefficients and the

mathematical framework is a known function which uses the parameters to define the behavior

of the model.


Kernel Classification

Initially, the task of identifying patients who required adaptive therapy was considered

in terms of a classification problem, where data from "old" patients were used to predict the class

of "new" patients. For example, if one H&N patient exhibited significant change in the first few

treatments, then that patient would be classified differently than one whose cross-sectional

measurements showed little change. Then once a database of pre-classified patients was formed,

new patient data could be compared to old data in order to classify that new patient. For this

problem, the classes were "abnormal" or "normal", where abnormal patients' measurements

significantly decreased over the course of treatment, whereas normal patients' measurements did

not. So initially a non-parametric kernel regression technique was used to predict which patients

would require adaptive radiation therapy.

The basis of kernel regression is to estimate a response using a weighted average of points in the training set that are local to the query point. For this study, the training set contained all

of the historical cross-sectional measurement data. This data had been previously evaluated so

that each patient was already classified as abnormal or normal. Since the classes are already

known, the kernel classification model can learn the relationships present in the historical data.

This type of model is slightly different than typical types of kernel regression models because the

output is not a prediction of cross-sectional measurements, but the class to which a certain

patient belongs (abnormal or normal). So, a patient whose measurements cross the 0.5 cm

threshold would be classified as abnormal, whereas those who do not would be considered

normal.

The kernel classification framework is comprised of three basic steps:


1. The distance of the query input (new data) to each of the vectors in the memory

matrix (historical data) is calculated. There are many distance functions available,

but the most common is the Euclidean distance, also known as the L2-norm. For a

single input, the Euclidean distance for the ith input and a query is given by:

d(X, q) = √( Σ_i (X_i − q_i)² )    (3.3)

where: X is the memory vector of historical data,

q is the query observation

For a single query vector, this calculation is repeated for each of the memory

vectors which then results in a matrix of distances.

2. The distances are then supplied as inputs into the kernel function, which converts

the distances into weights. A kernel function is used to transform the distance

between the query point and observation into a similarity, or normalized weight.

In general, query data points which are “close” to the memory matrix data points

are weighted more heavily, so large distances should have small weights and

small distances will have large weights. One commonly used function is the

Gaussian kernel, known as:

K(d) = (1 / (h√(2π))) exp(−d² / (2h²))    (3.4)


where: d is the distance calculated in Equation 3.3,

h is the kernel's bandwidth

The kernel's bandwidth, h, is used to control which distances are considered "local." If a

bandwidth is too large, the predictions are over-smoothed and all classifications will be the same,

and a small bandwidth will restrict the number of available training points in the vicinity of the

test points.

3. The last step is to use the weights calculated in Equation 3.4 to make a prediction. For classification problems, the standard kernel estimation technique is modified to estimate the probability of the new data set belonging to one of two classes. This

modified type of kernel classification comes from an application for classification

using kernel product estimators and a decision rule [Cooley & MacEachern 1998].

The final class estimate is made with the following equation:

PR_k(x) = [ Σ_{i=1}^{n} T_i K((x − x_i)/h) ] / [ Σ_{i=1}^{n} K((x − x_i)/h) ]    (3.5)

where: x_i is the ith training observation,
T_i is 1 if the response is a member of set k and 0 otherwise,
K is the Gaussian kernel function (Equation 3.4), and
n is the total number of training observations.
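For illustration, a minimal MATLAB sketch of the three steps above is given below, estimating the probability that one query patient belongs to the abnormal class; the training vectors, bandwidth, and 0.5 decision cut-off are illustrative assumptions rather than values from this study.

% Kernel classification sketch: estimate the probability that a query
% observation belongs to the "abnormal" class.
Xtrain = [7.4 7.3 7.1 6.9;     % each row: early measurements for one patient (cm)
          7.5 7.5 7.4 7.4;
          7.2 7.0 6.8 6.6;
          7.6 7.6 7.5 7.5];
T = [1; 0; 1; 0];              % 1 = abnormal (crossed 0.5 cm), 0 = normal
q = [7.3 7.1 7.0 6.8];         % query patient (new data)
h = 0.3;                       % kernel bandwidth (assumed)

% Step 1: Euclidean distance from the query to each memory vector (Eq. 3.3).
d = sqrt(sum((Xtrain - repmat(q, size(Xtrain, 1), 1)).^2, 2));

% Step 2: Gaussian kernel converts distances into weights (Eq. 3.4).
w = (1/(h*sqrt(2*pi))) * exp(-d.^2/(2*h^2));

% Step 3: weighted average of the class labels (the form of Eq. 3.5).
pAbnormal  = sum(T .* w) / sum(w);
isAbnormal = pAbnormal > 0.5;   % simple decision rule (assumed cut-off)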

Extrapolation-Based Prediction

In contrast to the kernel regression method, a parametric extrapolation-based model was

constructed, where a polynomial function and its regression coefficients are used for model

development. The equation for a second-order polynomial, or quadratic, is commonly known as:

y(t) = b0 + b1t + b2t²    (3.6)

where b0, b1, and b2 are the linear regression coefficients. The function y(t) is defined to be an

approximating curve that provides the best fit of the collected measurements. Once the

regression coefficients are known for certain measurements, a prediction is made through

extrapolation of y(t) in a moment of time (t) in the future. The rationale for using this type of

model is that the underlying characteristic of the data appears to decrease gradually over time;

therefore complex modeling is not necessary and a generic parametric model is well-suited for

the data.
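As a sketch of how such a prediction can be read directly off the fitted quadratic, the crossover treatment can also be obtained by solving y(t) = y(1) − 0.5 with MATLAB's roots function; the coefficients below are illustrative assumptions.

% Given fitted coefficients [b2 b1 b0] for y(t) = b0 + b1*t + b2*t^2 (Eq. 3.6),
% solve for the treatment number where the measurement has dropped by 0.5 cm.
p  = [2e-4, -0.04, 7.45];                 % illustrative coefficients, descending powers
y0 = polyval(p, 1);                       % fitted value at the first treatment

% Crossing condition: b2*t^2 + b1*t + b0 = y0 - 0.5
r = roots([p(1), p(2), p(3) - (y0 - 0.5)]);
r = r(imag(r) == 0 & r > 1);              % keep real roots after the first treatment

if isempty(r)
    disp('No 0.5 cm crossover predicted');
else
    fprintf('Predicted crossover at treatment %.1f\n', min(r));
end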

3.3.2 Reliability Metric

The term "reliability" in this dissertation is similar to the reliability of a machine (and its

parts) whose performance is monitored for signs of failure. The concepts of “remaining useful

life” and “time to failure” can be thought of as “effectiveness of the immobilization device” and


“closeness to crossover.” In the diagnostic part of the IGRT tool, correlation of the data was

discussed as a reliability measure after identifying patients who would need a new mask.

Because the prognostic part of the tool involves making a prediction as to when a new mask or

treatment plan is required, another measure, the "closeness to crossover" was combined with

correlation to provide confidence or some reliability for the time predictions. An example of how

this extra measure was calculated will be discussed later in this section.

The effectiveness of the immobilization device is affected by great reductions in cross-

sectional measurements. For example, at the beginning of a patient's treatment a new mask is

made, and it is molded specifically for that patient. It is assumed that this mask will remain tight

and fitted on the patient's head each day, however if anatomical changes or weight loss occur, the

mask could become loose enough that the patient is able to move their chin and/or head around

inside the mask. This is undesirable for two reasons:

1. All patients have marks put on their skin so that an alignment can be made using the

stationary in-room lasers. For head and neck patients, these marks are placed on the

outside of the mask, which also are used for patient setup.

2. Radiation is being delivered at the head, near critical structures such as the brain,

eyes, nerves, and the spinal cord. Once the mask is no longer immobilizing the

patient, a new one should be made in order to avoid irradiating these structures.

The remaining useful life or the effectiveness of the mask is assumed to be 100% at the

beginning of treatment. This means that the mask is correctly made and is tightly fitted on the


patient's head to prevent movement. However, if significant changes in cross-sectional

measurements occur, then the effectiveness, and therefore the “remaining life” of the mask

decrease as well. An effectiveness value is not used as a reliability measure of this system,

although it is directly related to the closeness to crossover value which is used to determine

reliability of a prediction.

As discussed previously, the regression coefficients for the trimmed lines were calculated

and then these were the basis for a prediction. Also, the correlation coefficients of the

measurements proved to be an effective measure of reliability. However, because true error cannot be assessed during the course of a patient's treatment, the "closeness to crossover" value can

be used as part of the reliability metric.

The “closeness to crossover” value is actually just the answer to the following question:

How close in proximity are the cross-sectional measurements to changing by more than 0.5 cm?

It can be thought of as the used “life” of the facemask, where the values range from 0 to 1, with 0

signifying that the mask has 100% of its “life” left and 1 meaning no “life” left. It is calculated

according to Equation 3.7:

Closeness to Crossover = 1 − (t_C − t) / t_C    (3.7)

where t_C is the predicted time of crossover and t is the time at which a prediction is being made

(the current time). To clarify, the extrapolation model is given 15 observations (starting at

observation 1) for each of the four measurements. After the 15th observation, the model predicts

a crossover time (for each measurement) based on the change in measurement for the first 15


observations, and recall that the prediction is made by extrapolating out the regression

coefficients. When the change in measurement is greater than 0.5 cm, then the model finds the

point in time where this happens, or t_C. The predicted “closeness to crossover” value is then calculated by plugging t_C and the current time point (t) into Equation 3.7. Consider Figure 15,

an example for just one of the four measurements for Patient 8.

The model has predicted that the measurement will have changed by 0.5 cm at t_C = 15.

This can be seen because the measurement taken at the first time step is 7.265, so (7.265 – 0.5)

cm = 6.765. The first value to be equal to or less than this value is 6.742 cm, located at time 15.

The “closeness to crossover” is calculated for each real time step t, where a value of 1 indicates a

predicted threshold crossover. A new crossover value is calculated every time an observation is

added, so a “closeness to crossover” value can be found for any time step through the course of

treatment.
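A minimal MATLAB sketch of Equation 3.7 is given below, reproducing the Patient 8 example in which the crossover is predicted at treatment 15, together with the averaging against the correlation coefficient that is described next; the correlation value used here is illustrative.

% Closeness to crossover (Equation 3.7) for a predicted crossover at tC = 15.
tC        = 15;                       % predicted crossover treatment
t         = 11:15;                    % current treatment numbers
closeness = 1 - (tC - t)./tC;         % 0.733, 0.800, 0.867, 0.933, 1.000

% Overall reliability: average of the measurement correlation and the
% closeness value, as discussed below. The correlation value is illustrative.
corrValue   = 0.95;
reliability = (corrValue + closeness)/2;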

Because the correlation coefficients had already been shown to be a valid measure of reliability,

closeness to crossover values were averaged with the correlation values, which resulted in a

value for overall reliability. Figure 16 illustrates how the correlations of the data as well as the

closeness to crossover values are quite similar and follow the same trend. The x-axis is the

treatment number and the y-axis is a scale from -1 to 1, where at zero, neither the data is very

correlated nor are the patient’s measurements close to crossing the 0.5 cm threshold. But, one

can see that both increase together. The closeness to crossover is a useful addition to reporting

reliability because it is possible for the data to be highly correlated without being close to


crossing the 0.5 cm threshold. The next section will discuss the results from the diagnostic and

prognostic parts of the IGRT tool.


Figure 15: Illustration of calculating the Closeness to Crossover value (the first 10 observed measurements and their changes, the next 10 extrapolated values, and the resulting closeness to crossover values, which rise from 0.733 at treatment 11 to 1.0 at the predicted crossover at treatment 15)


Figure 16: Correlation vs. Closeness to Crossover (four panels, Measurements 1 through 4, each plotting correlation and closeness to crossover on a -1 to 1 scale against treatment number)


4 RESULTS

The next section will present the results from the two components of the IGRT tool. The

diagnostic results identify those patients who will require adaptive therapy (i.e. a new facemask

or new treatment plan), and the prediction results relay the point in time when the cross-sectional

measurement data crosses the threshold of the physician-specified criteria. Because the goal of

this tool is to evaluate new patients for whom no previous data exists, these predictions start after

the patient has received 15 treatments, and predictions then continue after each subsequent treatment. This process ends after a reliable time-crossing prediction is made.

4.1 DIAGNOSIS

The first set of results identifies (with some reliability) which patients’ measurements

will cross the 0.5 cm threshold at some point in the future. As discussed previously, it is

necessary to evaluate each measurement individually because the physician is interested in

knowing if any of the four measurements change by more than 0.5 cm. It would be helpful to

diagnose patients as early as possible while still being reasonably certain; therefore, a reliability

value will be given after each of the diagnoses.

4.1.1 Data Set Selection

Before displaying the results from the diagnostic process, it is first necessary to address

the issue of data selection for this part of the tool. As mentioned previously, demographic

information was collected and evaluated. However, these data were inconsistently recorded and not


standardized. To clarify, the data were collected by reading through each patient's chart in an

attempt to find out as much demographic information as possible. However, it was found that

patient charts are often unorganized, and it was impossible to ensure that all demographic information

was completely documented.

One example of this was that some patients’ weights were not recorded consistently and

were also only taken once a week at most. Often, there would be no record of beginning weight

or the first record would occur after the first 10 treatments. Therefore, weight loss could possibly

be an indicator that a patient will require adaptive therapy, but the inconsistent and missing data made it unusable for this work. The rest of the demographic information proved to be

unhelpful as well. It is believed that relying on this type of data would only be possible for

patients who were under a clinical trial where all variables are under strict control, whereas the

patients in this study were just chosen at random for evaluation. Therefore, only the cross-

sectional measurement data was used for training and testing.

Another data set selection issue was also mentioned previously, and that is how many models should exist for each patient. All four lateral measurements were plotted

for C1-C4. Figure 16 shows the four measurements taken at the C3 level for one of the patients.

It is apparent that there is a wide range in physical measurement, from almost 8 cm to 6.5 cm.

Also, after visual inspection of the data, each vertebra's measurements did not always seem to follow the same trend. Therefore, instead of using the 4 lateral measurements for each vertebra as

data for the model, all of the "ones", "twos", "threes", and "fours" were pulled out for each

vertebra, and these were the data that were used for modeling.


Another advantage to taking this approach is that the model is given 4 measurements that

should all have about the same trend, therefore having some redundant information provides for

robust modeling and a better chance for making an accurate diagnosis/prediction. Also,

symmetry of measurement change can be visually evaluated. For example, if measurements 1 and

2 (which are on the right side of the face) are changing rapidly and measurements 3 and 4 are

changing slowly, one could assume that this was the result of asymmetric anatomical response,

whereas if all 4 measurements are changing rapidly, one could identify that patient as having a

symmetric anatomical response, which would most likely be attributed to overall weight loss.

Distinguishing between tumor response and overall weight loss was not one of the major

contributions in this work, however the tool does allow for plotting any of the data for any of the

patients’ measurements. Therefore, the physician could visually inspect the plots if he/she were

interested in knowing what type of anatomical response is occurring.

4.1.2 Diagnosing ART Candidates

Two methods, trend detection and change in measurement, were previously discussed as

possible ways to identify patients who will require adaptive therapy in the future. The results

from these two methods will now be discussed.

Trend Detection

The Mann-Kendall test for trend detection worked. However, sometimes when a trend

was detected, it was not very significant in terms of a patient requiring ART. As stated before,

most patients will undergo some kind of anatomical change over the course of therapy, and so


the MK test did return accurate results because it simply tests for a decreasing trend. Therefore,

the results were visually inspected and a “professional decision” was made as to which patients

were considered abnormal or normal, using a threshold criterion of a 0.5 cm change over the entire

course of treatment. As a note, each of the four measurements was kept with their respective

vertebra when evaluating trends.

Patients were classified as either normal or abnormal, where abnormal patients were those whose measurements crossed the 0.5 cm threshold and normal patients were those whose measurements did not. Figure 17 shows an example

of one patient’s cross-sectional measurements over the course of 32 treatments, and Figure 18

shows a different patient’s measurements over the course of 45 treatments. The first figure

visibly shows a significant decreasing trend over the course of treatment and would be

considered as abnormal. However the second figure only shows a slight decrease over time, so

this would be normal change in measurement, meaning not enough change for a patient to

require adaptive therapy. As mentioned before, the MK test cannot provide quantitative results about the magnitude of a trend, only whether or not the data is “trendy.”

Therefore using calculated changes in measurement was the method used for diagnosis.


Figure 17: Significant decreasing trend for Patient 14 (Measurements 1-4, cross-sectional measurement (cm) vs. fraction number)

Figure 18: Slightly decreasing trend for Patient 24 (Measurements 1-4, cross-sectional measurement (cm) vs. fraction number)


To reiterate the method, 4 different lateral measurements were taken at the C1-C4

vertebral levels and then each measurement was taken out from its respective vertebra level. The

result is having all of the "ones", "twos", "threes", and "fours" put together to create 4 different

models for each patient.

Change in Measurement

The changes in cross-sectional measurement were first calculated by starting with the

first 15 observations and extrapolating out the trimmed mean lines (i.e. each of the four

measurements). A prediction is made about whether or not we think any of the measurements

will cross the 0.5 cm threshold. Also, the average correlation coefficients (which are squared and

do not include any negatively correlated values) are reported, giving a measure of the prediction

reliability.

All true patient data was tested to see which patients' measurements (and specifically which of the four) crossed the 0.5 cm threshold. Tables 2-4 show the results from that testing,

including the correlation values for the data, and all of the data from these tables (excluding

correlation values) were plotted for each patient and can be found in Appendix B. For each

measurement (1-4) the true crossing point is listed along with how well the data was correlated.

Another important value to note is the total number of treatments. For example, Measurement 1

for Patient 12 shows a true crossing time at 46/46 treatments, and the correlation of the data has a

value of 0.955. In the prognostic part of the tool, if the model predicts that the measurement

never crosses the threshold, there would be some error associated with that prediction; however

the true point of crossing was at the end of treatment. Therefore, the total number of treatments


should be noted when evaluating the performance of the prediction model, which will be

presented in the next part of this section.


Table 2: Crossover data for Patients 1-10 (for each measurement M1-M4: crossover point (treatment number) / reliability value / reliable (Y/M/N); "-" indicates the 0.5 cm threshold was never crossed)

Patient 1 (40 treatments): M1 13 / 0.885 / Y; M2 15 / 0.990 / Y; M3 26 / 0.948 / Y; M4 11 / 0.962 / Y
Patient 2 (44 treatments): M1 13 / 0.922 / Y; M2 22 / 0.995 / Y; M3 39 / 0.985 / Y; M4 25 / 0.851 / Y
Patient 3 (38 treatments): M1 - / 0.838 / Y; M2 - / 0.910 / Y; M3 - / 0.919 / Y; M4 33 / 0.971 / Y
Patient 4 (33 treatments): M1 - / 0.883 / Y; M2 32 / 0.441 / M; M3 - / 0.891 / Y; M4 - / 0.959 / Y
Patient 5 (39 treatments): M1 - / 0.279 / N; M2 - / 0.234 / N; M3 - / 0.858 / Y; M4 - / 0.760 / Y
Patient 6 (36 treatments): M1 - / 0.920 / Y; M2 - / 0.720 / Y; M3 - / 0.950 / Y; M4 - / 0.556 / M
Patient 7 (34 treatments): M1 7 / 0.986 / Y; M2 11 / 0.963 / Y; M3 28 / 0.806 / Y; M4 9 / 0.924 / Y
Patient 8 (32 treatments): M1 27 / 0.955 / Y; M2 26 / 0.988 / Y; M3 17 / 0.981 / Y; M4 20 / 0.984 / Y
Patient 9 (42 treatments): M1 - / 0.568 / M; M2 36 / 0.991 / Y; M3 32 / 0.914 / Y; M4 - / 0.502 / M
Patient 10 (30 treatments): M1 8 / 0.848 / Y; M2 - / 0.859 / Y; M3 - / 0.893 / Y; M4 8 / 0.570 / M


Table 3: Crossover data for Patients 11-20 (for each measurement M1-M4: crossover point (treatment number) / reliability value / reliable (Y/M/N); "-" indicates the 0.5 cm threshold was never crossed)

Patient 11 (39 treatments): M1 21 / 0.981 / Y; M2 - / 0.652 / Y; M3 30 / 0.985 / Y; M4 22 / 0.960 / Y
Patient 12 (46 treatments): M1 46 / 0.955 / Y; M2 42 / 0.997 / Y; M3 - / 0.821 / Y; M4 - / 0.746 / Y
Patient 13 (42 treatments): M1 - / 0.838 / Y; M2 22 / 0.974 / Y; M3 16 / 0.856 / Y; M4 15 / 0.711 / Y
Patient 14 (38 treatments): M1 7 / 0.998 / Y; M2 12 / 0.995 / Y; M3 14 / 0.971 / Y; M4 8 / 0.958 / Y
Patient 15 (37 treatments): M1 14 / 0.995 / Y; M2 20 / 0.994 / Y; M3 13 / 0.992 / Y; M4 11 / 0.990 / Y
Patient 16 (35 treatments): M1 35 / 0.792 / Y; M2 - / 0.560 / M; M3 21 / 0.720 / Y; M4 10 / 0.978 / Y
Patient 17 (46 treatments): M1 43 / 0.729 / Y; M2 16 / 0.611 / Y; M3 20 / 0.503 / M; M4 32 / 0.941 / Y
Patient 18 (51 treatments): M1 - / 0.833 / Y; M2 - / 0.402 / M; M3 - / 0.501 / M; M4 - / 0.559 / M
Patient 19 (40 treatments): M1 20 / 0.950 / Y; M2 - / 0.465 / M; M3 - / 0.892 / Y; M4 - / 0.414 / M
Patient 20 (33 treatments): M1 13 / 0.989 / Y; M2 21 / 0.986 / Y; M3 23 / 0.930 / Y; M4 31 / 0.942 / Y


Table 4: Crossover data for Patients 21-30 (for each measurement M1-M4: crossover point (treatment number) / reliability value / reliable (Y/M/N); "-" indicates the 0.5 cm threshold was never crossed)

Patient 21 (40 treatments): M1 31 / 0.939 / Y; M2 38 / 0.937 / Y; M3 36 / 0.915 / Y; M4 24 / 0.970 / Y
Patient 22 (38 treatments): M1 28 / 0.953 / Y; M2 14 / 0.995 / Y; M3 20 / 0.981 / Y; M4 25 / 0.849 / Y
Patient 23 (29 treatments): M1 10 / 0.698 / Y; M2 13 / 0.575 / M; M3 - / 0.939 / Y; M4 - / 0.845 / Y
Patient 24 (41 treatments): M1 34 / 0.967 / Y; M2 - / 0.938 / Y; M3 38 / 0.545 / M; M4 - / 0.942 / Y
Patient 25 (38 treatments): M1 - / 0.881 / Y; M2 - / 0.356 / M; M3 - / 0.555 / M; M4 - / 0.450 / M
Patient 26 (39 treatments): M1 - / 0.328 / N; M2 - / 0.903 / Y; M3 - / 0.427 / M; M4 - / 0.349 / M
Patient 27 (47 treatments): M1 - / 0.704 / Y; M2 - / 0.920 / Y; M3 - / 0.778 / Y; M4 - / 0.994 / Y
Patient 28 (45 treatments): M1 - / 0.902 / Y; M2 37 / 0.978 / Y; M3 - / 0.593 / M; M4 17 / 0.910 / Y
Patient 29 (41 treatments): M1 - / 0.473 / M; M2 - / 0.900 / Y; M3 - / 0.367 / M; M4 16 / 0.986 / Y
Patient 30 (46 treatments): M1 - / 0.602 / Y; M2 - / 0.977 / Y; M3 - / 0.448 / M; M4 - / 0.653 / Y


Table 5 shows the results for diagnosing those patients who would have benefited from

adaptive therapy, whether it was receiving a new facemask or a new treatment plan. In order to

make a diagnosis, the first 15 observations were given. This is an assumption that was made due

to the fact that all 30 patients underwent an average of 39 treatments (a total of 1180). It would

be desirable to make a diagnosis early in treatment, so beginning with 15 observations seemed

feasible since this is just below half of the average number of treatments. As a note, because

there are 4 measurements per patient, and each measurement was modeled individually, there are

actually 120 total diagnoses being made.

There are two different results from the diagnostic process. The first identifies those patients who will require a new mask at some point during their treatment. The table below shows which patients were believed to need adaptive therapy at some point in their treatment given the first 15 observations, indicated by Y/N. Second, if a decision could not be made within the first 15 observations (indicated by a "?"), the point at which a reliable diagnosis could be made is shown as the treatment number/total number of treatments. There were only 3 measurements, shown as an X, for which a reliable diagnosis could never be produced.


Table 5: Diagnosis of ART candidates

Each cell gives "Cross 0.5 cm after 15 tx (Y/N)". An entry of "? t/T" means that no decision could be made within the first 15 observations and that a reliable diagnosis was first reached at treatment t of T total treatments ("Tx when reliable"); "? X" marks the 3 measurements for which a reliable diagnosis could never be made.

Patient  Meas. 1    Meas. 2    Meas. 3    Meas. 4
1        Y          Y          Y          ? 20/40
2        Y          Y          Y          ? 33/44
3        N          N          N          N
4        ? 23/38    ? 32/38    N          N
5        N          N          N          N
6        N          N          ? 17/36    ? 20/36
7        Y          Y          ? 25/34    Y
8        Y          ? 26/32    Y          ? 20/32
9        ? 25/42    ? 20/42    ? 37/42    N
10       Y          ? 19/30    ? 17/30    Y
11       Y          ? 32/39    ? 24/39    ? 17/39
12       ? 16/46    ? 18/46    ? 19/46    ? 30/46
13       N          ? 16/42    ? 38/42    ? 23/42
14       Y          Y          ? X        ? 20/38
15       Y          Y          Y          Y
16       N          N          ? 17/35    Y
17       ? 25/46    Y          ? 23/46    Y
18       ? 26/51    N          N          N
19       ? 27/40    ? 20/40    N          N
20       Y          Y          ? 30/33    ? 20/33
21       ? 29/40    N          N          ? 19/40
22       ? 25/38    Y          Y          ? 25/38
23       Y          Y          N          N
24       ? X        N          N          N
25       ? 30/38    ? 25/38    N          ? 25/38
26       ? 20/39    ? 20/39    ? 20/39    N
27       ? 16/47    ? 18/47    ? 16/47    ? 16/47
28       N          ? 21/45    ? 28/45    Y
29       N          ? X        ? 26/41    Y
30       ? 27/46    ? 21/46    ? 18/46    ? 19/46


The diagnosis results show that approximately half of the measurements (61/120) produced a prediction of whether the patient would need ART within the first 15 observations, and another 28/120 required up to 20 observations to reach a reliable diagnosis. Together, (61 + 28)/120, or about 74%, of the measurements accurately diagnosed whether a patient would need adaptive therapy within the first 20 observations. While it can take 20 observations to obtain fairly accurate results, it is important to remember that the 30 head and neck patients underwent an average of 39 treatments.

As mentioned previously, 11 of the 30 patients clearly did not require adaptive therapy

(indicated by dashed lines in Tables 2-4), while 4 were questionable, meaning the crossover

points were very close to the end of their treatment. Therefore, half of the patients more than

likely needed to have a new facemask constructed and/or the development of a new treatment

plan based on the change in any of their four cross-sectional measurements.

4.2 PREDICTION

Early in treatment, a prediction can be unstable or inaccurate, but as more observations are added there is more information available for making a prediction. There were a few questions about the reliability of the data because, as mentioned before, some patients had a significant amount of data missing from their data sets. Appendix B shows the percentages of data that were missing and then corrected for by linear interpolation. From that appendix it can be seen that 9 patients' data consisted of more than 25% interpolated data, and the uncertainty in these patients' measurements could potentially reduce the reliability of a prediction.
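As a simple illustration of how such gaps can be filled, the sketch below linearly interpolates missing daily measurements; it is a minimal example that assumes the series is stored as a NumPy array with NaN marking missed treatments (the function and variable names are illustrative, not the dissertation's actual code).

    import numpy as np

    def fill_missing(measurements):
        """Linearly interpolate NaN entries in a 1-D series of daily measurements.
        Returns the filled series and the fraction of points that were interpolated."""
        y = np.asarray(measurements, dtype=float)
        t = np.arange(len(y))                      # treatment numbers 0, 1, 2, ...
        missing = np.isnan(y)
        y[missing] = np.interp(t[missing], t[~missing], y[~missing])
        return y, missing.mean()

    # Example: a measurement series with two missed imaging days
    filled, fraction = fill_missing([6.8, 6.7, np.nan, 6.5, np.nan, 6.3])
    print(filled, f"{fraction:.0%} interpolated")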


As discussed before, correlation information was used as part of the reliability measure for diagnosis, because it was found that percent error decreased as the correlation of the data increased; the following figures illustrate this result. Average percent error and the correlation of the data were examined for five of the thirty data sets. Figures 19 and 20 show the results from two of the cases. It can be seen that, in general, as the correlation of the measurements goes up, the percent error decreases.
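The relationship shown in Figures 19 and 20 can be summarized by the short sketch below, which, for each number of observations, fits a straight line to the data seen so far, records the percent error of a one-treatment-ahead extrapolation, and records the correlation of the observed data with treatment number. The pairing of these two quantities and the names used here are illustrative assumptions, not the exact calculation from the dissertation.

    import numpy as np

    def correlation_and_error(y, min_obs=5):
        """For each prefix of a measurement series, return (|correlation|, percent error)
        of a linear extrapolation one treatment ahead."""
        y = np.asarray(y, dtype=float)
        t = np.arange(1, len(y) + 1)
        pairs = []
        for n in range(min_obs, len(y)):
            slope, intercept = np.polyfit(t[:n], y[:n], 1)   # regression line so far
            predicted = slope * t[n] + intercept             # extrapolate to the next treatment
            percent_error = 100.0 * abs(predicted - y[n]) / y[n]
            correlation = abs(np.corrcoef(t[:n], y[:n])[0, 1])
            pairs.append((correlation, percent_error))
        return pairs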


Figure 19: Correlation vs. Percent Error for Case #1

[Four subplots (Measurements 1-4): percent error and correlation plotted against treatment number.]


Figure 20: Correlation vs. Percent Error for Case #2

[Four subplots (Measurements 1-4): percent error and correlation plotted against treatment number.]


4.2.1 Kernel Classification

Once a database of pre-classified patients is formed, measurements for any new patient can be taken and used to determine whether they would be a candidate for ART. For the predictions, data from 30 patients were included and a leave-one-out cross-validation method was used. The kernel classification model correctly predicted all of the patients whose assumed true category (class) was abnormal. For the assumed normal cases, 13 were identified as normal and 2 were identified as abnormal. Table 6 illustrates the results in a confusion matrix.

Table 6: Kernel Classification prediction results

                      True Category
                      Abnormal    Normal
Predicted Abnormal    15          2
Predicted Normal      0           13
Total                 15          15
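A minimal sketch of this kind of leave-one-out kernel classification is given below. It assigns each held-out patient to the class ("abnormal" = 1, "normal" = 0) whose Gaussian kernel density, estimated from the remaining patients, is larger; the feature construction, bandwidth, and labels are illustrative assumptions rather than the memory-matrix formulation actually used in this work.

    import numpy as np

    def kernel_score(x, train, bandwidth=1.0):
        """Average Gaussian kernel between feature vector x and the training rows."""
        diffs = (train - x) / bandwidth
        return np.mean(np.exp(-0.5 * np.sum(diffs ** 2, axis=1)))

    def loocv_confusion(features, labels, bandwidth=1.0):
        """Leave-one-out classification; returns a 2x2 confusion matrix
        with rows = true class and columns = predicted class."""
        features = np.asarray(features, dtype=float)
        labels = np.asarray(labels, dtype=int)
        confusion = np.zeros((2, 2), dtype=int)
        for i in range(len(labels)):
            keep = np.arange(len(labels)) != i
            scores = [kernel_score(features[i], features[keep & (labels == c)], bandwidth)
                      for c in (0, 1)]
            confusion[labels[i], int(np.argmax(scores))] += 1
        return confusion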


The problems with the kernel classification model are twofold:

1. The database of pre-classified patients was formed by visually inspecting the trend of

patients’ measurements (at each vertebral level) and attempting to classify them by a

“professional opinion.” This was, as mentioned before, because the results from the

Mann-Kendall test reported that all of the data was “trendy” and therefore that all

patients should be classified as abnormal. Because this was known to be false, a

professional decision was made as to which patients were classified as abnormal or

normal.

2. The data used for the memory matrix should have been separated by measurement number rather than grouped by vertebra. Also, the rate of change would have been more helpful for making a prediction than the raw measurements themselves.

This non-parametric classification technique is useful in industrial problems because relationships in the data can be learned and then used to make future predictions. However, the problem presented

in this dissertation involves human data, which often exhibit unexplainable trends and

unpredictable results. Because the behavior of the data is very particular for a given patient, it

was decided that the employed prediction method should be specific for each new patient rather

than try to make a prediction based on historical patient data.


4.2.2 Extrapolation Model

As explained in the methodology section, the basis for the extrapolation model is to take

the first few observations of a patient’s measurements and extrapolate out (using regression

coefficients from the prediction line thus far) to make a prediction about what point in time the

0.5 cm threshold will be crossed. For example, if the model is initially given 15 observations, a time prediction is made; a new time prediction and reliability value are then produced each time an observation is added.

For this portion of the tool, an "accurate" prediction was determined by evaluating the variation across a series of time predictions. For example, assume that over five consecutive observation times the predicted crossover treatments were 12, 13, 14, 13, and 14. The variation in these numbers is very low, so the standard deviation of every set of "last 3 predictions" was used as an indicator of how "sure" one is that a crossover (at the 0.5 cm threshold) is about to occur. If the variation (i.e., standard deviation) of the "last 3 predictions" is less than 3, then the correlation value and the closeness-to-crossover value are averaged at that point and reported as the measure of reliability. If the reliability at that point is high, meaning greater than 0.5, then the predicted crossover time at that point is assumed to be the correct crossover time. However, if the reliability value is less than 0.5, the model continues making predictions until a reliable crossover time is found.
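A minimal sketch of this update loop is shown below, assuming the prediction line is an ordinary least-squares fit and that the correlation and closeness-to-crossover values are supplied by the earlier parts of the tool; all names and the array layout are illustrative, not the dissertation's actual code.

    import numpy as np

    THRESHOLD_CM = 0.5   # change from baseline that defines a crossover

    def predicted_crossover(y, baseline, n_obs):
        """Fit a least-squares line to the first n_obs points and return the treatment
        number at which it first differs from baseline by THRESHOLD_CM (None if never)."""
        t = np.arange(1, n_obs + 1)
        slope, intercept = np.polyfit(t, y[:n_obs], 1)
        if slope == 0:
            return None
        target = baseline - THRESHOLD_CM if slope < 0 else baseline + THRESHOLD_CM
        t_cross = (target - intercept) / slope
        return int(np.ceil(t_cross)) if t_cross > 0 else None

    def run_extrapolation(y, baseline, correlation, closeness, start=15):
        """Add observations one at a time; accept a crossover time once the last three
        predictions agree (standard deviation < 3) and the reliability (mean of the
        correlation and closeness-to-crossover values) exceeds 0.5."""
        y = np.asarray(y, dtype=float)
        predictions = []
        for n in range(start, len(y) + 1):
            guess = predicted_crossover(y, baseline, n)
            predictions.append(guess)
            last3 = [p for p in predictions[-3:] if p is not None]
            if len(last3) == 3 and np.std(last3) < 3:
                reliability = 0.5 * (correlation[n - 1] + closeness[n - 1])
                if reliability > 0.5:
                    return guess, reliability, n      # reliable predicted crossover time
        return None, None, len(y)                     # no reliable crossover time found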


Table 7 displays results only for the 15 patients who should have received adaptive therapy, showing how reliably a predicted crossover time was reported after the first 15 observations were given. As mentioned before, reliability measures greater than 0.5 are considered reliable.

Table 7: Reliability for patients requiring ART

Patient  Meas.  Timely       Reliability  Reliable
Number   No.    Prediction   Value        (Y/N)
1        1      *            0.66         Y
1        2      *            0.52         Y
1        3      *            0.37         N
1        4
2        1      *            0.61         Y
2        2      *            0.54         Y
2        3      *            0.34         N
2        4
7        1      *            0.82         Y
7        2      *            0.76         Y
7        3
7        4      *            0.88         Y
8        1      *            0.76         Y
8        2
8        3      *            0.44         N
8        4
10       1      *            0.57         Y
10       2
10       3
10       4      *            0.99         Y
11       1      *            0.54         Y
11       2
11       3
11       4
13       1      *            0.93         Y
13       2
13       3
13       4
14       1      *            0.80         Y
14       2      *            0.75         Y
14       3
14       4
15       1      *            0.89         Y
15       2      *            0.76         Y
15       3      *            0.95         Y
15       4      *            0.88         Y
16       1      *            0.27         N
16       2      *            0.29         N
16       3
16       4
17       1
17       2      *            0.55         Y
17       3
17       4      *            0.62         Y
20       1      *            1.00         Y
20       2      *            0.61         Y
20       3
20       4
22       1
22       2      *            0.46         N
22       3      *            0.62         Y
22       4
23       1      *            0.83         Y
23       2      *            0.53         Y
23       3      *            0.36         N
23       4      *            0.29         N
28       1      *            0.55         Y
28       2
28       3
28       4      *            0.92         Y


Because the most important result from the IGRT tool is the time prediction, it is necessary to evaluate its accuracy, specifically in terms of the amount of "lead time" it provides after the prediction is made. For example, suppose that at treatment number 10, measurement "one" (for patient "x") predicts that the 0.5 cm threshold will be crossed at treatment number 15, with a reliability value greater than 0.5. If, however, the measurement does not cross until treatment 20, then the staff will essentially have 5 days of "lead time" for preparation. Therefore, it is desirable to give as much lead time as possible.
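Under one plausible reading of the example above (treatments delivered once per day), the lead-time bookkeeping reduces to a simple difference; the sketch below, with illustrative names, bins that difference the same way the later tables do.

    def lead_time_bin(predicted_crossover, actual_crossover):
        """Days of warning a reliable prediction provides before the actual crossover."""
        lead = actual_crossover - predicted_crossover   # e.g. 20 - 15 = 5 days of lead time
        if lead >= 5:
            return "5+ treatments ahead of actual crossover"
        if lead >= 1:
            return "1-4 treatments ahead of actual crossover"
        return "no lead time (0 treatments ahead)"

    print(lead_time_bin(15, 20))   # the example above: 5 days of lead time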

The convenient aspect of the tool is that predictions (and reliability values) are continuously generated for each measurement throughout the course of a patient's treatment. In other words, if the model reliably predicts early on (say at treatment 8) that a given measurement will cross at treatment 15, and at treatment 15 the measurement has not quite crossed, new predictions can be made until the variation in the time predictions settles. As the variation in the time predictions decreases, the predicted closeness to crossover should increase, thus increasing the reliability of each prediction.

Tables 8-10 show the prediction results for reliability thresholds of 0.5, 0.6, and 0.7, respectively. The results are given in (a) confusion matrices, where the numbers of predicted and true crossovers can be seen, along with the accuracy for each class (crossover or no crossover) and an overall measure of accuracy. The second part of each table, (b), illustrates the prediction windows, where the average number of observations (daily treatments) needed to make a prediction is given. Another important piece of information is how many days in advance one is notified that a patient will require adaptive therapy. As mentioned before, this is the key result for knowing how much time will be available to pre-allocate clinical time and resources. Therefore, the lead time was split into three categories: more than 5 days, 1-4 days, and no lead time (0 days).

For example, Table 8(b) shows that it takes an average of 11 observations (days) to make predictions with a reliability value of at least 0.5. Also, 57% of those predictions were given with at least 5 days available to make preparations, 14% with 1-4 days, and 29% with no lead time. Obviously, it is desirable to know more than 0 days before a patient will require ART, so Tables 9 and 10 show the results for making predictions with higher reliability values (0.6 and 0.7). Both have similar results, with a prediction made more than 80% of the time at least 5 days ahead of the actual crossover. This suggests that a reliability of about 0.65 could be used when reporting "highly reliable" predictions. The tables also show that, while it does require more observations to make a higher-reliability prediction, the amount of significant "lead time" (i.e., more than 5 days ahead) increases from 57% to 86% with 7 extra observations. This result suggests that predictions should continue to be made (even after a reliability value of 0.5 is reached) until a reliability close to 0.7 is obtained.
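For reference, the class and overall accuracies reported in Tables 8-10 follow directly from the confusion-matrix counts; a small sketch of that arithmetic (with illustrative names) is given below.

    import numpy as np

    def confusion_accuracies(confusion):
        """confusion[i, j] = count of true class i predicted as class j
        (class 0 = crossover, class 1 = no crossover)."""
        confusion = np.asarray(confusion, dtype=float)
        class_accuracy = np.diag(confusion) / confusion.sum(axis=1)   # correct / true count
        overall_accuracy = np.diag(confusion).sum() / confusion.sum()
        return class_accuracy, overall_accuracy

    # Counts from Table 8(a); the per-class values come out to 62.5% and 50.0%
    per_class, overall = confusion_accuracies([[30, 18], [6, 6]])
    print(per_class, overall)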


Table 8: Prediction results for a reliability > 0.5

a) Confusion matrix

                       Predicted    Predicted      Class
                       Crossover    No Crossover   Accuracy (%)
True Crossover         30           18             62.5%
True No Crossover      6            6              50.0%
Overall Accuracy (%)                               63.0%

b) Prediction windows (average number of observations to make a prediction: 11)

57.0%   5 treatments ahead of actual crossover
14.0%   1-4 treatments ahead of actual crossover
29.0%   No lead time (0 treatments ahead)


Table 9: Prediction results for a reliability > 0.6

a) Confusion matrix

                       Predicted    Predicted      Class
                       Crossover    No Crossover   Accuracy (%)
True Crossover         31           17             64.6%
True No Crossover      6            6              50.0%
Overall Accuracy (%)                               65.1%

b) Prediction windows (average number of observations to make a prediction: 16)

90.0%   5 treatments ahead of actual crossover
0.0%    1-4 treatments ahead of actual crossover
10.0%   No lead time (0 treatments ahead)


Table 10: Prediction results for a reliability > 0.7

a) Confusion matrix

                       Predicted    Predicted      Class
                       Crossover    No Crossover   Accuracy (%)
True Crossover         41           7              85.4%
True No Crossover      9            3              60.0%
Overall Accuracy (%)                               86.0%

b) Prediction windows (average number of observations to make a prediction: 18)

82.6%   5 treatments ahead of actual crossover
8.7%    2-4 treatments ahead of actual crossover
8.7%    No lead time (0 treatments ahead)


The next series of figures (Figures 21-28) shows the results for Patient 11. They illustrate

measurements 1-4 being predicted after 10 and then 15 observations. Each figure contains 3

subplots, where the first illustrates the prediction line, predicted crossover point, and true

crossover point. The x-axis for all subplots is the number of treatments and the y-axis for the first

is the cross-sectional measurement value in centimeters.

The second subplot shows the error with which the prediction is made along with the true

closeness to crossover, which was calculated knowing the true point of crossover. For this

subplot, the y-axis ranges from 0 to 1. As discussed previously, the closeness to crossover values

range from 0 to 1, where values approaching 1 indicate that a crossover is about to occur. The

error was calculated as a percent; however it was normalized on a scale of 0 to 1 in order to

compare it to the crossover values. Therefore an error value of 1 really indicates 100% error.

The last subplot shows the predicted value versus the true value of closeness to crossover. Once again, the y-axis is a scale from 0 to 1, where a crossover, be it predicted or true, equals a value of 1. The second and third subplots can be interpreted together, where the predicted closeness to crossover should be inversely proportional to the error associated with that prediction. The true closeness to crossover values are plotted simply to illustrate how accurately the model makes a prediction over time, because when a new set of data is presented, the true crossover point is obviously unknown.

Figures 21 and 22 show the results from a prediction made after 10 and then 15 observations, respectively. In Figure 21, subplot 1, it can be seen that the predicted line crosses the 0.5 cm threshold at treatment number 20, indicated by RED diamonds, whereas the true crossover value, shown as BLUE x's, is at treatment 21. In subplot 2, it can be seen that the true closeness to crossover value is increasing over time, yet has not quite reached 1, which means that no crossover has actually occurred. Also, the error starts out high (as expected) and then decreases as the prediction comes closer to the actual crossover point.

Figure 21: Measurement 1 prediction after 10 observations

Figure 22: Measurement 1 prediction after 15 observations

Figure 23: Measurement 2 prediction after 10 observations

Figure 24: Measurement 2 prediction after 15 observations

Figure 25: Measurement 3 prediction after 10 observations

Figure 26: Measurement 3 prediction after 15 observations

Figure 27: Measurement 4 prediction after 10 observations

Figure 28: Measurement 4 prediction after 15 observations

At this point it is necessary to point out one important aspect of the modeling process.

Although predictions can be made every time an observation is added into the data set, if an

accurate crossover prediction is made for a certain measurement, then there is essentially no need

to continue adding in additional data. At that point, the information would be given to the

physician so that the patient could be re-evaluated for the need to construct a new facemask or

create a new treatment plan.

The analysis of the cross-sectional measurement data provided interesting results regarding which patients should have had a new facemask made versus those who actually had a new mask made during the course of treatment. Patients 11 and 17 were the only 2 patients who had new masks made, while analysis of the data shows that approximately 15 of the patients (including 11 and 17) actually needed some form of adaptive therapy, and likely a new facemask. Figure 29 shows the fused images for Patient 17 right before a new mask was constructed, and can be used to validate the results for this patient.

It can be seen in the figure that the areas of the patient's face that show anatomical response correspond to the locations at which Measurements 2 and 3 were taken. Going back

to Table 7, we can see that an accurate prediction was made for both measurements within 15

observations and slightly more for measurements 1 and 4. This means that after 15 observations,

the physician could have been notified that a change greater than 0.5 cm would occur around

treatment 25.

Figure 29: Illustration of anatomical changes for Patient 17 after 25 treatments (the locations of Measurements 2 and 3 are indicated on the images)


5 CONCLUSIONS

The focus of this dissertation was to use cross-sectional measurement data collected from

archived CT scans to create a tool for determining which patients will require and benefit from

adaptive radiation therapy, specifically for the purpose of creating a new facemask and/or a new

radiation therapy treatment plan.

It has been shown that many head and neck cancer patients undergo significant

anatomical changes which might make them a candidate for adaptive radiation therapy. The

facemask is an important part of ensuring accurate and reproducible radiation therapy treatments; therefore, it is imperative that ineffective masks be identified and replaced. Also, significant changes in cross-sectional measurements could cause normal structures to shift into the treatment field and result in escalated doses to healthy tissue. This could result in an inaccurate treatment delivery; therefore, a patient might require a new treatment plan along with a new facemask.

The task of identifying those patients who would benefit from ART was achieved by collecting cross-sectional measurements of 30 patients' face and neck regions, and using the newly developed two-part IGRT tool to identify those patients who would need a new mask and/or treatment plan. A time prediction was also made as to when this would be required, so that the clinical staff could allocate time and resources to prepare for adaptation of a patient's treatment course. The cross-sectional measurement data are a true indicator of which patients require new masks; they are also valuable in identifying those patients who might need a new treatment plan with or without a new facemask.


Fifteen H&N patients were identified as ones who should have had a new facemask

constructed. The reality is that only 2 patients received new masks, and the delayed toxicities

experienced by the other patients could have possibly been prevented with the construction of a

new mask. The cross-sectional data proved to be an accurate representation of the anatomical

changes that each patient underwent. In other words, the predicted change in cross-sectional measurement was used to correctly diagnose, for 74% of the 120 measurements, whether the patient would require adaptive therapy. Also, the prediction results for those correctly diagnosed show that an average of 11 observations (days) is needed to reach a reliability of 0.5, while an even more reliable prediction (reliability = 0.7) can be made if 18 observations are supplied.

5.1 RECOMMENDATIONS FOR FUTURE WORK

The focus of this dissertation was on the development of a tool which will aid the

oncologists in deciding which H&N patients will require adaptive radiation therapy. This is

needed because it will allow the clinic staff to allocate time and effort to adequately prepare for a new plan. It can also alert the oncologist as to which patients will require close

monitoring during their course of therapy.

In the future, the proposed methodology could be extended in the following directions:

- Develop a classification scheme using rates of change of the data, and then further develop the proposed modified kernel regression model to predict which patients will require adaptive radiation therapy. This would be slightly different from the work in this dissertation, in that one would actually attempt to make a prediction based on historical patient data.

- Further evaluate the “abnormal” patients, who are the ones who have undergone

weight loss or tumor response. This will require the utilization of the rate of change

results. For example, if the rate of change is the same for all four lateral

measurements, it could indicate weight loss, whereas an asymmetric rate of change

might point to tumor response. The work done in this dissertation allows for making the distinction between these two types of patients. However, the results are deduced by visually evaluating the plots of the measurement data. Further analysis could involve tabular results and a more complex set of if/then rules (a simple example of such a rule is sketched after this list).

- Integrate the IGRT tool into software programs that control IGRT machines,

specifically those with MVCBCT capabilities. It is known that MVCBCT is an

excellent imaging source for H&N patients because it can provide for increased

contrast between bone and soft tissue and decreased noise in its images. With these

types of images, the center of each vertebra could be automatically found and then

measurements could be taken by the radiation therapists while the patient is

undergoing treatment. This would provide the medical physicists with the data on a

weekly basis and then they could report to the oncologist as to what type of

anatomical changes (if any) the patient is undergoing.
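As a starting point for the rule-based analysis suggested in the second item above, one possible if/then rule is sketched below; the slope-based features and the tolerance values are illustrative assumptions, not results from this dissertation.

    import numpy as np

    def classify_anatomical_change(rates, change_tol=0.01, symmetry_tol=0.1):
        """rates: slope (cm per treatment) of each of the four lateral measurements.
        Similar negative slopes suggest overall weight loss, while one measurement
        changing much faster than the others suggests a localized tumor response."""
        rates = np.asarray(rates, dtype=float)
        if np.all(np.abs(rates) < change_tol):
            return "no clear anatomical change"
        if np.all(rates < 0) and (rates.max() - rates.min()) < symmetry_tol:
            return "weight loss (symmetric shrinkage)"
        return "possible tumor response (asymmetric change)"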


REFERENCES


Allen, XA, Qi, XS, Pitterle, M, Kalakota, K, Mueller, K, Erickson, BA, Wang, D, Shultz, CJ, Firat, SY, Wilson, JF (2007), Interfractional variations in patient setup and anatomic change assessed by daily computed tomography. Int. J. Radiation Oncology Biol. Phys. 68:581-591, 2007

Barker, J, Garden, A, Dong, L, O’Daniel, J, Wang, H, Court, L, Morrison, W, Rosenthal, D,

Chao, C, Mohan, R, Ang, K (2003), Radiation-induced changes during fractionated radiotherapy: A pilot study using an integrated CT-LINAC system. I. J. Radiation Oncology Biol. Phys. 57:S304, 2003

Barker, J, Garden, A, Ang, K, O’Daniel, J, Wang, H, Court, L, Morrison, W, Rosenthal, D,

Chao, C, Tucker, S, Mohan, R, Dong, L (2004), Quantification of volumetric and geometric changes occurring during fractionated radiotherapy for head-and-neck cancer using an integrated CT/linear accelerator system. Int. J. Radiation Oncology Biol. Phys. 59: 960-970, 2004

Bel, A, Keus, R, Vijlbrief, RE, Lebesque, JV (1995), Setup deviations in wedged pair irradiation

of parotid gland and tonsillar tumors, measured with an electronic portal imaging device. Radiotherapy and Oncology 37:153-159, 1995

Bentzen, SM (2005), Radiation therapy: Intensity modulated, image guided, biologically

optimized and evidence based. Radiotherapy and Oncology 77:227-230, 2005 Boyer, AL, Antonuk, L, Fenster, A, van Herk, M, Meertens, H, Munro, P, Reinstein, LE, Wong,

J (1992), A review of electronic portal imaging devices (EPIDs). Medical Physics 19:1-16, 1992

Brabbins, D, Martinez, A, Yan, D, Lockman, D, Wallace, M, Gustafson, G, Chen, P, Vicini, F,

Wong, J (2005), A dose-escalation trial with the adaptive radiotherapy process as a delivery system in localized prostate cancer: Analysis of chronic toxicity. Int. J. Radiation Oncology Biol. Phys. 61:400-408, 2005

Cooley, CA and SN MacEachern (1998), “Classification via Kernel Product Estimators”,

Biometrika, Vol. 85, No. 4, Dec. 1998, pp.823-833 Dawson, LA and MB Sharpe (2006), Image-guided radiotherapy: rationale, benefits, and

limitations. Lancet Oncology 7:848-58, 2006 Dawson, LA and DA Jaffray (2007), Advances in Image-Guided Radiation Therapy. Journal of

Clinical Oncology 25:938-946, 2007 de Dombal, FT, Leaper, DJ, Staniland, JR, McCann, AP, Horrocks, JC (1972), Computer-aided

diagnosis of acute abdominal pain. British Medical Journal 2:9-13, 1972


Dietz, EJ and TJ Killeen (1981), “A Nonparametric Multivariate Test for Monotone Trend with Pharmaceutical Applications", J. of the American Statistical Association, Vol.76, No.373, Mar.1981, pp.169-174

Doemer, A, Den, R, Kubicek, G, Machtay, M, Xiao, Y (2008), Dosimetric evaluation of targets

and organs at risk for head and neck cancer cases during an adaptive treatment course. Proc of the 50th Annual ASTRO Meeting 189:S85, 2008

Erridge, SC, Seppenwoolde, Y, Muller, SH, van Herk, M, Jaeger, KD, Belderbos, JSA, Boersma,

LJ, Lebesque, JV (2003), Portal imaging to assess set-up errors, tumor motion, and tumor shrinkage during conformal radiotherapy of non-small cell lung cancer. Radiotherapy and Oncology 66:75-85, 2003

Feng, M and A Eisbruch (2007), Future issues in highly conformal radiotherapy for head and

neck cancer. Journal of Clinical Oncology 25:1009-1013, 2007 Hansen, EK, Bucci, MK, Quivey, JM, Weinberg, V, Xia, P (2006), Repeat CT imaging and

replanning during the course of IMRT for head-and-neck cancer. Int. J. Radiation Oncology Biol. Phys. 64:355-362, 2006

Herring, DF and DMJ Compton (1970), The degree of precision required in the radiation dose

delivered in cancer radiotherapy. British J of Radiology 5:1112-1118, 1970 Hong, TS, Tomé, WA, Chappell, RJ, Chinnaiyan, P, Mehta, MP, Harari, PM (2005), The impact

of daily setup variations on head-and-neck intensity modulated radiation therapy. Int. J. Rad. Onc. Biol. Phys. 61:779-788, 2005

Hurkmans, CW, Remeijer, P, Lebesque, JV, Mihnheer, BJ (2001), Set-up verification using

portal imaging; review of current clinical practice. Radiotherapy and Oncology 58:105-120, 2001

ICRU Report 50 (1978), Prescribing, Recording, and Reporting Photon Beam Therapy,

Washington DC, International Commission of Radiation Units and Measurements, 1978 Jaffray, DA, Yan, D, Wong, JW (1999), Managing geometric uncertainty in conformal intensity-

modulated radiation therapy. Seminars in Rad Onc 9:4-19, 1999. Jaffray, DA (2005), Emergent technologies for 3-dimensional image-guided radiation delivery.

Seminars in Radiation Oncology 15:208-216, 2005

Jenkins, W, Langen, K, Meeks, S, Olivera, G, Ruchala, K, Haimerl, J, Wagner, T, Willoughby, T, Kupelian, P (2005), Use of daily megavoltage CT imaging to determine delivered dose


variations in head and neck cancer patients. Proc of the 47th Annual ASTRO Meeting 63:S361, 2005

Johnston, ME, Langton, KB, Haynes, RB, Mathieu, A (1994), Effects of computer-based clinical

decision support systems on clinical performance and patient outcome: A critical appraisal of research. Annals of Internal Medicine 120:135-142, 1994

Kupelian, PA, Ramsey, C, Meeks, SL, Willoughby, TR, Forbes, A, Wagner, TH, Langen, KM

(2005), Serial megavoltage CT imaging during external beam radiotherapy for non-small-cell lung cancer: Observations on tumor regression during treatment. Int. J. Radiation Oncology Biol. Phys. 63:1024-1028, 2005

Lee, C, Langen, KM, Kupelian, PA, Meeks, SL, Mañon, RR, Lu, W, Haimerl, J, Olivera, GH

(2007), Positional and volumetric changes in parotid glands during head and neck radiation

Liang, J, Chi, Y, Yan, D (2008), Cone beam CT image based adaptive inverse planning for head

and neck cancer: A validation study. Proc of the 50th Annual ASTRO Meeting 2962:S600, 2008

Lu, W, Olivera, G, Chen, Q, Ruchala, K, Haimerl, J, Meeks, S, Langen, K, Kupelian, P (2006),

Deformable registration of the planning image (kVCT) and the daily images (MVCT) for adaptive radiation therapy. Physics in Medicine and Biology 51:4357-4374, 2006

Marks, JE, Haus, AG, Sutton, HG, Griem, ML (1974), Localization error in the radiotherapy of

Hodgkin’s disease and malignant lymphoma with extended mantle fields. Cancer 34:83-90, 1974

Meldolesi, E, Wu, Q, Chen, P, Yan, D, Martinez, A (2006), Evaluation of anatomic and

dosimetric changes during treatment course of head and neck IMRT: Is replanning necessary? Proc of 48th Annual ASTRO Meeting 183:S101, 2006

Meyer, JL, Verhey, L, Xia, P, Wong, J (2007), New technologies in the radiotherapy clinic.

Front. Radiation Therapy Oncology 40:1-17, 2007 Michalski, JM, Graham, MV, Bosch, WR, Wong, J, Gerber, RL, Cheng, A, Tinger, A, Valicenti,

RK (1996), Prospective clinical evaluation of an electronic portal imaging device. Int. J. Radiation Oncology Biol. Phys. 34:943-951, 1996

Miles, EA, Clark, CH, Urbano, MTG, Bidmead, M, Dearnaley, DP, Harrington, KJ, A’Hern, R, Nutting, CM (2005), The impact of introducing intensity modulated radiotherapy

into routine clinical practice. Rad and Oncology 77:241-246, 2005


Oldham, M, Létourneau, D, Watt, L, Hugo, G, Yan, D, Lockman, D, Kim, LH, Chen, PY, Martinez, A, Wong, JW (2005), Cone-beam-CT guided radiation therapy: A model for on-line application. Rad and Oncology 75:271.e1-271.e8, 2005

Pouliot, J, Bani-Hashemi, A, Chen, J, Svatos, M, Ghelmansarai, F, Mitschke, M, Aubin, M, Xia,

P, Morin, O, Bucci, K, Roach, M, Hernandez, P, Zheng, Z, Hristov, D, Verhey, L (2005), Low-dose megavoltage cone-beam CT for radiation therapy. Int. J. Radiation Oncology Biol. Phys. 61:552-560, 2005

Ramsey, C, Mahan, S, Scaperoth, D, Chase, D (2004), Image-guided adaptive therapy for the

treatment of lung cancer. Proc of the 46th Annual ASTRO Meeting 1137:S339, 2004 Ramsey, CR, Langen, KM, Kupelian, PA, Scaperoth, D, Meeks, SL, Mahan, S, Seibert, R

(2006), A technique for adaptive image-guided helical tomotherapy for lung cancer. Int. J. Radiation Oncology Biol. Phys. 64:1237-1244, 2006

Seibert, R, Ramsey, CR, Hines, JW, Kupelian, PA, Langen, KM, Meeks, SL, Scaperoth, D

(2007), A model for predicting lung cancer response to therapy. Int. J. Radiation Oncology Biol. Phys. 67:601-609, 2007

Sharpe, MB, Brock, KK, Rehbinder, H, Forsgren, C, Lundin, A, Dawson, LA, Struder, G,

O’Sullivan, B, McNutt, TR, Kaus, MR, Lof, J, Jaffray, DA (2005), Adaptive planning and delivery to account for anatomical changes induced by radiation therapy of H&N cancer. Proc of the 47th Annual ASTRO meeting 5:S3, 2005

Siker, ML, Tomé, WA, Mehta, MP (2006), Tumor volume changes on serial imaging with

megavoltage CT for non-small-cell lung cancer during intensity-modulated radiotherapy: How reliable, consistent, and meaningful is the effect? Int. J. Radiation Oncology Biol. Phys. 66:135-141, 2006

Sohaib, SA, Turner, B, Hanson, JA, Farquaharson, M, Oliver, RTD, Reznek, RH (2000), CT

assessment of tumour response of treatment: comparison of linear, cross-sectional and volumetric measures of tumour size. The British Journal of Radiology 73:1178-1184, 2000

Sorcini, B and A. Tilikidis (2006), Clinical application of image-guided radiotherapy, IGRT (on

the Varian OBI platform). Cancer Radiothérapie 10:252-257, 2006 Wang, C, Chong, F, Wu, J, Lai, M, Cheng, J (2007), Body weight loss associates with set-up

error in nasopharyngeal cancer patients undergoing image guided radiotherapy. Proc of the 49th Annual ASTRO Meeting 69:S203, 2007


Woodford, CW, Yartsev, S, Dar, AR, Bauman, G, Van Dyk, J (2007), Adaptive radiotherapy planning on decreasing gross tumor volumes as seen on megavoltage computed tomography images. Int. J. Radiation Oncology Biol. Phys. 69:1316-1322, 2007

Wu, Q, Chi, Y, Chen, P, Yan, D, Martinez, A (2007), Adaptive replanning strategies accounting

tumor and normal tissue shrinkage for HN-IMRT. Proc of the 49th Annual ASTRO Meeting 69:S187, 2007

Yan, D, Vicini, F, Wong, J, Martinez, A (1997a), Adaptive radiation therapy. Phys. Med. Biol.

42:123-132, 1997 Yan, D, Wong, J, Vicini, F, Michalski, J, Pan, C, Frazier, A, Horwitz, E, Martinez, A (1997b),

Adaptive modification of treatment planning to minimize the deleterious effects of treatment setup errors. IJROBP 38:197-206, 1997

Yan, D, Ziaja, E, Jaffray, D, Wong, J, Brabbins, D, Vicini, F, Martinez, A (1998), The use of

adaptive radiation therapy to reduce setup error: A prospective clinical study. Int. J. Rad. Onc. Biol. Phys. 41:715-720, 1998

Zeidan, O, Langen, KM Meeks, S, Mañon, R, Wagner, T, Willoughby, T, Jenkins, DW,

Kupelian, PA (2007), Evaluation of image-guided protocols in the treatment of head and neck cancers. Int. J. Rad. Onc. Biol. Phys. 67:670-677, 2007


APPENDICES


APPENDIX A


A.1 ADDITIONAL DEMOGRAPHIC INFORMATION

This section includes weight loss information collected for each patient. It also reports those patients who required the construction of a new facemask (see the "Had New Mask Made" column of the table below). The two figures that follow show the data for those 2 patients.

Patient Weight Loss Information

Patient  Beginning     Ending        Total Weight  Total Percent of  # of Elapsed Days  Had New
Number   Weight (lbs)  Weight (lbs)  Loss (lbs)    Weight Lost (%)   at Ending Weight   Mask Made
1        176           158           18            10.2              60                 n
2        102.5         95            7.5           7.3               68                 n
3        135.5         116           19.5          14.4              55                 n
4        194           183.5         10.5          5.4               63                 n
5        144           136           8             5.6               54                 n
6        109           109           0             0.0               57                 n
7        220           162.5         57.5          26.1              71                 n
8        198.5         182           16.5          8.3               47                 n
9        153           131.5         21.5          14.1              62                 n
10       132.5         140           N/A           N/A               43                 n
11       194           176           18            9.3               74                 y
12       218           202.5         15.5          7.1               48                 n
13       152           146.5         5.5           3.6               62                 n
14       196           168           28            14.3              65                 n
15       212           192           20            9.4               62                 n
16       179.5         150           29.5          16.4              62                 n
17       298.5         261           37.5          12.6              60                 y
18       171.5         168.5         3             1.7               47                 n
19       58            70.5          N/A           N/A               64                 n
20       160.5         137           23.5          14.6              58                 n
21       163.5         152           11.5          7.0               55                 n
22       230           209           21            9.1               50                 n
23       156           148           8             5.1               34                 n
24       185           170.5         14.5          7.8               59                 n
25       139.5         139.5         0             0.0               46                 n
26       86.75         90.5          N/A           N/A               55                 n
27       211.5         202.5         9             4.3               63                 n
28       232           215.75        16.25         7.0               63                 n
29       193           176           17            8.8               65                 n
30       N/A           N/A           N/A           N/A               N/A                n

First of Two Patients to Have New Mask (Patient 11)

[Figure: four subplots (Measurements One through Four) of cross-sectional measurement (cm) versus treatment number, showing the trimmed mean line, crossover line, and crossover point.]

Second Patient to Have New Mask (Patient 17)

[Figure: four subplots (Measurements One through Four) of cross-sectional measurement (cm) versus treatment number, showing the trimmed mean line, crossover line, and crossover point.]


APPENDIX B


B.1 MEASUREMENT DATA FOR ALL 30 PATIENTS

The first set of figures in this appendix is intended to illustrate how the dimensionality of the data was reduced for each patient and each measurement. To reiterate the method, 4 different lateral measurements were taken at the C1-C4 vertebral levels, and each measurement was then separated from its respective vertebral level. The result is that all of the "ones", "twos", "threes", and "fours" are grouped together to create 4 different models for each patient.

The figures following these show which patients experienced anatomical changes that caused their cross-sectional measurements to change by more than 0.5 cm, indicated by the vertical lines labelled "Measurement Crossover." A measurement change of this size indicates a need for a new facemask; however, as previously discussed, only 2 of the 30 patients actually received a new mask.
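A sketch of this regrouping is shown below, under the assumption that one patient's raw data are stored as a (treatments x 4 vertebrae x 4 measurements) array and that a 25% trimmed mean is taken across the vertebral levels for plotting; both assumptions are for illustration only.

    import numpy as np
    from scipy.stats import trim_mean

    def group_by_measurement(raw):
        """raw[t, v, m] = lateral measurement m (0-3) at vertebra v (C1-C4), treatment t.
        Returns, for each measurement number, the per-vertebra series (all the "ones",
        "twos", etc.) together with the trimmed mean used as the plotted model."""
        models = {}
        for m in range(raw.shape[2]):
            per_vertebra = raw[:, :, m]                              # treatments x 4 vertebrae
            models[m + 1] = {
                "per_vertebra": per_vertebra,
                "trimmed_mean": trim_mean(per_vertebra, 0.25, axis=1),
            }
        return models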

[Figures: for each of Patients 1-30, a four-panel plot of cross-sectional measurement (cm) versus treatment number for Measurements One through Four, showing the individual Vertebra 1-4 traces and the trimmed mean, followed by a combined plot of Measurements 1-4 with vertical lines marking the Measurement One/Two/Three/Four crossovers.]

[Figure: Percent of interpolated data used in modeling for each patient. Percentage of interpolated data (0% to 50%) plotted for Patients 1 through 30.]
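The figure reports, for each patient, how much of the daily measurement record was filled in rather than measured directly. The following is only a sketch of how such a percentage could be tallied, assuming missing treatment days are flagged as NaN and filled by straight-line interpolation; neither convention is a restatement of the method used in this work.

import numpy as np

def percent_interpolated(daily_measurements_cm):
    """Percentage of treatment days whose value had to be interpolated.

    Missing days (NaN entries) are filled by straight-line interpolation
    between the nearest recorded measurements; the linear fill-in is an
    assumption made for illustration only.
    """
    values = np.asarray(daily_measurements_cm, dtype=float)
    missing = np.isnan(values)

    # Fill the missing treatment days from the recorded ones.
    days = np.arange(values.size)
    values[missing] = np.interp(days[missing], days[~missing], values[~missing])

    return 100.0 * missing.sum() / missing.size, values

# Hypothetical record: NaN marks treatment days with no measurement.
record_cm = [7.9, np.nan, 7.6, 7.5, np.nan, np.nan, 7.1, 7.0]
pct, filled_cm = percent_interpolated(record_cm)
print(f"{pct:.1f}% of this record was interpolated")   # prints 37.5%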

B.2 CONFERENCE PRESENTATION AND AWARD

Harris, C., A. Usynin, R. Seibert, and C. Ramsey (2008) "A Novel Technique for Predicting Which Head & Neck Patients will Require Adaptive Therapy," Proceedings of the 50th Annual American Society for Therapeutic Radiology and Oncology (ASTRO) Conference, (72), pp. S84-85. This submission includes the preliminary results presented in this dissertation, and the conference presentation described the methodology used in the initial development of the IGRT tool. The submission was selected as the physics winner of the Resident Clinical Basic Science Research Award, the highest award given at this conference to medical physics residents. Once additional work is completed, the results will be submitted for publication to ASTRO's journal (the Red Journal).

VITA

Carley Harris was born in Jackson, TN on November 1, 1982. She moved to Dickson, TN before the beginning of high school and graduated from Dickson County High School in May 2001. She enrolled at the University of Tennessee, Knoxville in the summer of 2001 and earned her Bachelor of Science in Biomedical Engineering in May 2005 and her Master of Science in Nuclear Engineering in May 2007.

As an undergraduate, she was a teaching assistant for the Freshman Engineering Fundamentals Program (Engage) for two years and later became a graduate teaching assistant for the Honors Section of the program. After entering the Nuclear Engineering Department, she worked for the Engage program, took graduate classes, and volunteered at the Thompson Cancer Survival Center for almost two years. At the cancer center she conducted research under the direction of Dr. Chester Ramsey and was eventually employed there as a Medical Physics Resident for two years.

She has completed and presented five conference publications as the primary author and has co-authored eight other conference publications.

While at the University of Tennessee, she was a member of the Pride of the Southland Marching Band for two years, playing piccolo the first year and baritone the second. She was also a member of Gamma Sigma Sigma National Service Sorority for two years.