[IEEE 2006 9th International Conference on Information Fusion - Florence (2006.07.10-2006.07.13)]



Fusion Considerations in Monitoring and Handling Agitation Behaviour for Persons with Dementia

Victor Foo Siang Fook, Qiang Qiu, Jit Biswas, Aung Aung Phyo Wai
Institute for Infocomm Research, Singapore

21 Heng Mui Keng Terrace, Singapore 119613
{sffoo, qiu, biswas, apwaung}@i2r.a-star.edu.sg

Abstract - This paper presents the fusion considerations for a smart hospital application. In particular, we present the design and implementation of a fusion architecture for monitoring and handling agitation behaviour for persons with dementia. In addition, we exploit Semantic Web standards to provide reusable fusion middleware support for services that facilitate care-giving and clinical assessment of dementia patients in a context enlightened fashion.

Keywords: Fusion considerations, monitoring and handling, agitation, persons with dementia, fusion architecture

1 Introduction

There is increasing interest worldwide in applying the latest developments in pervasive computing, context aware systems and sensor networks [1,2] to healthcare. One specific area of focus is the development of smart hospital applications such as the context aware hospital bed and pill container [3], activity tracking systems [4], etc. Although there are some works describing prototypes of smart hospital applications, there is still limited work that takes into account the detailed fusion considerations of using multimodal sensors to monitor and handle patients in a pervasive clinical manner, and these applications tend to take a piecemeal approach. This paper seeks to bridge the gap by describing the design and implementation of a smart healthcare application from the fusion perspective. As a first step, we target applications for persons with dementia, and behaviors that are of clinical interest to doctors. One common behavior of persons with dementia is agitation. One of the challenges faced by doctors and caregivers is the detailed and continuous monitoring of such agitation behavior. Due to the nature of the observations that must be taken, time-pressed doctors and over-stressed caregivers are not the ideal people to make detailed records of the behavioral patterns of these individuals. Our fusion work involves collaboration with a local hospital to semi-automatically monitor elderly patients with dementia in a hospital ward, for the purpose of detecting the onset of agitated behavior and facilitating clinical assessment and informal care-giving in a context enlightened fashion.

Our work is based on the Scale to Observe Agitation in Persons with Dementia of the Alzheimer Type (SOAPD)

[5]. Developed by Ladislav Volicer, Ann Hurley and Lois Camden, leading world authorities on palliative care for patients with dementia, this tool seeks to objectively classify the degree of agitation experienced by a person with dementia. In order to perform SOAPD measurements, patients must be observed from several angles and along several dimensions, such as vocalizations, whole body movements, partial body movements, repetitive movements and so on. Human observers often tend to miss one or more types of behavioral pattern while focusing on a particular pattern of interest. With the use of modern sensor and networking technology, and with algorithmic techniques based on sensed context information and information fused from various combinations of sensing modalities, we hope to successfully detect the onset of agitated behavior. Furthermore, we hope to support flexible and standardized schemes for automated intervention triggering and activity planning management, so as to handle dementia patients in a context enlightened fashion.

In this paper, we present the fusion considerations for multimodal sensors in a distributed manner to monitor and handle dementia patients. The rest of the paper is organized as follows: Section 2 discusses the fusion design considerations for the dementia patient monitoring application. Section 3 describes a fusion middleware for monitoring and handling dementia patients. Section 4 presents some of the preliminary results we collected. Section 5 concludes with a discussion of future work.

2 Fusion Design

In this section, we describe our fusion design considerations for adaptive monitoring and handling of persons with dementia, based on feedback from doctors and caregivers and on our prototyping experience. Our initial work focuses on four of the behavioral features of SOAPD, namely Total Body Movement, Up Down Movement, Repetitive Movement and Outward Movement. The multimodal sensors adopted for understanding the agitated behaviours are ultrasound sensors, optical fiber grating pressure sensors, acoustic sensors such as microphones or microphone arrays, infrared sensors, RFID and video cameras. In addition, we are also investigating the possibility of using microphones to study the human vocalization related SOAPD


indicators, such as High Pitched or Loud Words, Repetitive Vocalization and Negative Words.

2.1 Design Considerations

We deal with fusion at the sensor, ward and hospital levels, based on the requirements of the doctors and caregivers. We will not exhaustively list all the considerations, but selectively elaborate on the important ones that are relevant from a clinical perspective. At the sensor level, we focus on dealing with quality, uncertainty and data association issues. At the ward level, we focus more on situation awareness and summarized reporting. At the hospital level, we focus more on exploiting a knowledge base of user intent, such as that of the doctors and caregivers, for intervention and planning.

2.1.1 Sensor Level

In dementia patient monitoring applications, data acquired from sensors are used for many different potential clinical purposes, e.g., detecting patients' agitation for possible intervention, monitoring the effectiveness of medicine-taking, studying the possible factors related to agitation, etc. From a clinical perspective, two basic considerations during sensor data fusion are a) to correctly associate sensor readings with monitored events and b) to minimize data uncertainty. One effective solution is the proper deployment of sensors in a manner that increases the possibility for a sensor's reading to be confirmed by the readings of other sensors positioned nearby. However, the sensor redundancy introduced by this approach increases the system cost and resource utilization in terms of processing, energy and bandwidth. The data quality requirements posed by queries should be used as a basis to determine the level of sensor redundancy. In this section, the considerations on sensor level fusion in a clinical context are further elaborated along two dimensions, namely inter-sensor fusion and inter-modality fusion.

Inter-sensor Fusion: Multiple sensors of the same modality can be deployed to increase the total geographic sensing coverage and also to reduce the uncertainty of the data from a single sensor. In our hospital ward sensor deployment, multiple ceiling-mounted ultrasonic sensors are used to provide better coverage for localization information, which is used mainly to monitor certain types of agitation and to detect the presence of doctors and caregivers. Within the cone-shaped sensing area, a variation in the distance reading from an ultrasonic sensor may be interpreted either as noise or as the presence of a person anywhere within a circular area of radius h*tan(α), where h is the height of the ceiling and α is the angle between the center beam and the sensing boundary of the ultrasonic sensor. However, when a neighboring ultrasonic sensor exhibits similar reading variations, the possibility of noise may be discounted and the person may be further localized to the intersection of the two respective circles. Placing sensors too close to each other may create the problem of interference. The effects of interference are taken into account as a trade-off against the data precision requirements of the applications which consume the sensor data.
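As a rough sketch of this geometry (the ceiling height and cone half-angle below are values of our own choosing, not from the deployment):

```python
import math

def coverage_radius(ceiling_height_m: float, half_angle_deg: float) -> float:
    """Radius of the floor-level circle covered by a ceiling-mounted
    ultrasonic sensor: r = h * tan(alpha)."""
    return ceiling_height_m * math.tan(math.radians(half_angle_deg))

def circles_overlap(c1, c2, r1, r2) -> bool:
    """True if the two coverage circles intersect, i.e. a person could be
    localized to their intersection when both sensors report variations."""
    return math.dist(c1, c2) <= r1 + r2

# Hypothetical deployment: 2.5 m ceiling, 30-degree half-angle cones.
r = coverage_radius(2.5, 30.0)          # about 1.44 m
# Two sensors 2 m apart: their circles overlap, so matching reading
# variations can be attributed to a person rather than to noise.
confirmed = circles_overlap((0.0, 0.0), (2.0, 0.0), r, r)
```

When the circles do not overlap, the neighboring sensor cannot serve as confirmation and a reading variation must be handled as potential noise.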

Inter-modality Fusion: Sensors of different modalities may be deployed to reduce their mutual data uncertainty, in a similar way to that used for two or more sensors of the same modality. For example, a passive infrared (PIR) sensor can report the presence of a person within its circular sensing area. Therefore, a PIR sensor may enhance the localization information from an ultrasonic sensor in the same way as the ultrasonic modality itself. In some situations, different sensor modalities supplement each other to increase the detection rates of certain events. In Up Down Movement (UDM) agitation detection (one of the types of agitation being measured), ultrasound and pressure sensors can supplement each other in capturing the following observed behavior patterns of patients:

1. While agitated, patients may trigger only ultrasonic sensors and not pressure sensors, by avoiding constant contact with the surface where the pressure sensors are deployed.
2. While agitated, patients may trigger only pressure sensors and not ultrasonic sensors, by moving the body with high intensity but low amplitude.
3. While not agitated, patients may still trigger pressure sensors but not ultrasonic sensors, by adjusting body position normally but frequently.

A significantly higher agitation recognition rate is achieved with both ultrasonic and pressure sensor modalities deployed, compared to detection from a single modality, as shown in the experimental section.
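A rule-of-thumb reading of the three patterns above can be sketched as follows (the function and its labels are hypothetical illustrations, not the paper's classifier, which is Bayesian):

```python
def udm_evidence(ultrasonic_triggered: bool, pressure_triggered: bool) -> str:
    """Qualitative strength of UDM evidence from the two modalities.
    Patterns 1 and 2 mean a single modality can still indicate agitation,
    while pattern 3 means pressure-only readings may be normal position
    adjustment, so one modality alone is weaker evidence than both."""
    if ultrasonic_triggered and pressure_triggered:
        return "strong"   # both modalities agree
    if ultrasonic_triggered or pressure_triggered:
        return "weak"     # ambiguous on its own (patterns 1, 2 or 3)
    return "none"
```

This is why the recognition-rate tables later show both modalities together outperforming either one alone.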

It is noted that one good strategy discovered during multimodal sensor deployment is to maximize the possibility of confirming sensor readings using sensor sources that are primarily deployed to provide other, unique information. For example, while identifying entities, e.g., visitors, medicines, etc., by their RFID tags, an RFID reader can also enhance the localization information from other modalities such as PIR or ultrasonic sensors by reporting the presence of an entity within its valid cone-shaped sensing area.

2.1.2 Ward Level

At the ward level, the doctors and caregivers would like a daily or monthly customized summarized behavior report on all agitation behaviors of the patient. Due to the unpredictable behavior patterns of patients and uncontrollable sensor reading errors, an inference engine, such as a Bayesian network, is necessary for any reliable report on agitation behavior detection given the features extracted at the sensor level. During ward level fusion, the most vital step is to select the appropriate sensor modalities to fuse. It is shown in the experimental section that sensor modalities that give no significant evidence to


events to be monitored will greatly hinder the overall event detection rate, which in turn generates inaccurate summarized reports on patients. It is also noted that the relationship between an agitated behavior and the detection results from different sensor modalities is not obvious. For example, the sensor modalities, e.g., ultrasonic or pressure sensors, triggered by UDM are not constant; and symptoms of the same agitated behavior can also vary from patient to patient, e.g., some patients tend to make sounds during UDM while others do not. Therefore, it is more appropriate to construct and customize a Bayesian network by learning from the training data sets. In our approach, we collect a set of training data and use it to tune and optimize a Bayesian network initially constructed under the assumption that every sensor deployed contributes to all the events to be monitored. The customized Bayesian networks are then used in the actual monitoring to fuse the readings from the multiple sensor modalities.
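The fusion step itself can be sketched as a naive-Bayes combination of per-modality detections; the prior and conditional probabilities below are illustrative placeholders, not values learned from our training data:

```python
# Minimal naive-Bayes fusion of two sensor modalities for agitation
# detection. All probabilities are illustrative placeholders.

P_AGITATED = 0.2                        # prior P(agitation)
# CPT entries: P(sensor triggers | agitated) and P(sensor triggers | not)
CPT = {
    "ultrasonic": {True: 0.85, False: 0.10},
    "pressure":   {True: 0.80, False: 0.25},
}

def posterior_agitation(readings: dict) -> float:
    """P(agitation | readings), assuming sensors are conditionally
    independent given the agitation state (naive Bayes)."""
    p_a, p_n = P_AGITATED, 1.0 - P_AGITATED
    for sensor, triggered in readings.items():
        p_trig_a = CPT[sensor][True]     # P(trigger | agitated)
        p_trig_n = CPT[sensor][False]    # P(trigger | not agitated)
        p_a *= p_trig_a if triggered else 1.0 - p_trig_a
        p_n *= p_trig_n if triggered else 1.0 - p_trig_n
    return p_a / (p_a + p_n)
```

With these placeholder numbers, two agreeing modalities push the posterior well above what either modality achieves alone, mirroring the recognition-rate improvement reported in the experiments.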

During the collection of the training data set, in addition to the data on agitation detection reported by each sensor modality, a series of time-stamped manual observations is also required. Whenever agitation behaviors of patients are observed, a simple input mechanism, e.g., pressing anywhere on the keyboard, is used by a human observer to notify the system of the current timestamp. The training data is further processed based on additional clinical criteria to obtain a combined agitation record from both the manual observations and the sensor detections. For example, for short duration agitation detection, the time domain is uniformly divided into 16-second slices and a binary value is assigned to each time slice for both the manual observation and the detection by sensors of each modality, with '1' for agitation present and '0' for no agitation observed. These processed binary values are used to compute the Conditional Probability Table (CPT) for constructing a Bayesian network. The score of the current network B is computed using the scoring metric defined as [6]:

L'(E, B) = -Σ_{i=1}^{m} log p(v_i) + (|B|/2) log m        (1)

where:
* E: the data, consisting of m samples
* p(v_i): the joint probability that the variables have the values specified by v_i
* |B|: the number of parameters in B
* v_i: an n-dimensional vector of values of the n variables

Starting from an initial Bayesian network, for each node we try to delete an existing edge to its nearest neighbor. If this causes the score calculated by the scoring metric to decrease, the resulting network is retained; otherwise we undo the change and continue. Each iteration of the algorithm performs this edge deletion step for every node in the network. If the score does not change over an entire iteration, we have reached a fixed point and found a locally optimal network with a lower score than all its alternatives. The process terminates with a network customized to a particular patient. It is noted that the manual observations required for the training data set can be gathered on an ongoing basis by doctors and caregivers, and the Bayesian network may therefore be continually improved and optimized whenever more training data become available.
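The greedy edge-deletion search with the scoring metric (1) can be sketched as follows, for binary variables with maximum-likelihood CPTs; the variable names and training rows are hypothetical:

```python
import math
from collections import Counter

def num_parameters(structure):
    """|B| for binary variables: one free parameter per parent configuration."""
    return sum(2 ** len(parents) for parents in structure.values())

def log_likelihood(structure, data):
    """Sum of log p(v_i) under maximum-likelihood CPTs fitted to `data`
    (a list of dicts mapping variable name -> 0/1)."""
    ll = 0.0
    for node, parents in structure.items():
        joint, marg = Counter(), Counter()
        for row in data:
            key = tuple(row[p] for p in parents)
            joint[(key, row[node])] += 1
            marg[key] += 1
        for row in data:
            key = tuple(row[p] for p in parents)
            ll += math.log(joint[(key, row[node])] / marg[key])
    return ll

def score(structure, data):
    """Scoring metric (1): negative log-likelihood plus (|B|/2) log m.
    Lower is better."""
    m = len(data)
    return -log_likelihood(structure, data) + num_parameters(structure) / 2 * math.log(m)

def greedy_edge_deletion(structure, data):
    """Try deleting each edge in turn; keep a deletion when it lowers the
    score; stop when an entire iteration makes no change (a fixed point)."""
    best = {n: list(ps) for n, ps in structure.items()}
    improved = True
    while improved:
        improved = False
        for node in best:
            for parent in list(best[node]):
                trial = {n: list(ps) for n, ps in best.items()}
                trial[node].remove(parent)
                if score(trial, data) < score(best, data):
                    best, improved = trial, True
    return best

# Hypothetical training rows: "ultra" tracks "agit", "acoustic" is noise.
data = [{"agit": a, "ultra": a, "acoustic": c}
        for a in (0, 1) for c in (0, 1) for _ in range(2)]
initial = {"agit": [], "ultra": ["agit"], "acoustic": ["agit"]}
learned = greedy_edge_deletion(initial, data)
# The uninformative acoustic edge is pruned; the ultrasonic edge survives.
```

This mirrors the per-patient customization described above: a modality contributing no evidence (here, the acoustic node) loses its edge, because the penalty term in (1) outweighs the negligible likelihood it adds.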

2.1.3 Hospital Level

At the hospital level, a key challenge is to fuse all related information from the sensors and other information sources with knowledge of user intent, such as that of the doctors or caregivers, into a knowledge management or electronic patient system. This system interfaces with the multitude of other systems that actuate specific interventions, e.g. music therapy systems, or that provide important context information, e.g. a caregiver's digital calendar and mobile phone, or a doctor's decision-making aid, to support automated intervention triggering and activity planning/drug therapy management. It should allow the doctors and caregivers to search the system for desired information and then establish a connection to the desired service.

More advanced requirements will likely include facilitating effective drug therapy through side-effect monitoring. Most drugs used to treat dementia have many serious side-effects, and hence determination of the best dosage (i.e. one that treats symptoms with the least side-effects) is important. At present, dosage titration is based largely on incomplete information provided by familial caregivers through reviews. With automated monitoring and a good model relating drugs to their side-effects, the system can be extended to monitor for behavioral changes related to the side-effects.

We exploit Semantic Web standards [7,8] to provide reusable fusion middleware support for flexible event representation, query and reasoning, and standardized schemes for automated intervention triggering and activity planning to handle agitation detected in persons with dementia. To augment the ambient intelligence of the application and services, we will explore the use of DL Implementation Group (DIG) [9] compliant classifiers, specifically the RACER reasoner [10], to make inferences over our ontology base. Context reasoning will largely be employed to detect mid-to-long term patterns of disturbed behaviors (e.g. time of occurrence) and cognitive decline, as well as for short-term analysis relating to new drug administration. As such, it may involve the fusion of information in the form of ontologies from two different sources. To illustrate, assuming that the system can automatically discern when a patient is emotionally disturbed, the doctors and caregivers may proceed to make provisions for timely therapeutic interventions. For example, they may want the system to perform an automated intervention that provides a combination of relaxing music and colored images, which may decrease the agitated behaviour of a patient, or may simply need the system


to send an SMS message to them. Mechanisms for intervention in many existing systems are usually hard-coded. In our system, using an ontology and web services approach, we can fuse the information of multiple ontologies from different sources, as shown in Figures 1, 2 and 3, to provide flexible intervention by interfacing with the multitude of other systems that actuate specific interventions.

Figure 1: Intervention Ontology

Figure 2: Actuate SendSMS Intervention from System A (CaregiverAlert activation threshold 2505.5 = 377.8 (TB/MO) + 388.4 (UP/DO) + 197.4 (RE/PL) + 200.0 (OU/MO) + 524.8 (HI/LO) + 359.1 (RE/VO) + 458.0 (NE/WO); performBy SendSMS)

Figure 3: Actuate Music Intervention from System B (MusicTherapy activation threshold 1127.9 = 188.9 (TB/MO) + 194.2 (UP/DO) + 37.3 (RE/PL) + 36.5 (OU/MO) + 262.4 (HI/LO) + 179.6 (RE/VO) + 229.0 (NE/WO); performBy PlayMusic, performOn Speaker, performAt Location_1)

Figure 4: Reusable Fusion Middleware Architecture
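The threshold comparison encoded in the intervention ontologies of Figures 2 and 3 can be sketched as follows; the thresholds are the ones shown in the figures, while the selection logic and the component values passed in are our own illustration:

```python
# Weighted SOAPD component scores are fused into a single sum and
# compared against per-intervention activation thresholds, as in the
# Trigger instances of Figures 2 and 3.

THRESHOLDS = {
    "CaregiverAlert (SendSMS, System A)": 2505.5,
    "MusicTherapy (PlayMusic, System B)": 1127.9,
}

def fused_score(components: dict) -> float:
    """Sum the per-dimension agitation scores (TB/MO, UP/DO, ...)."""
    return sum(components.values())

def triggered_interventions(components: dict) -> list:
    """Return every intervention whose activation threshold is reached."""
    s = fused_score(components)
    return [name for name, threshold in THRESHOLDS.items() if s >= threshold]
```

A mildly elevated score would trigger only the lower-threshold music therapy, while a severe episode crossing 2505.5 would also alert the caregiver by SMS.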

3 Fusion Middleware Architecture

Here, we describe our reusable fusion middleware architecture, shown in Figure 4, which aims to help application developers design a patient data collection system and which facilitates the fusion of information from distributed sensor sources or other information sources, so as to build healthcare applications more efficiently and effectively.

Tier-1a Fusion Node (Sensor Level Fusion): This is the sensor itself, which possesses the capability to perform fusion locally and to log the raw data in its local database.

Tier-1b Fusion Node (Sensor Level Fusion): This can be the sensor itself (i.e. co-located with a Tier-1a node) or a base station connected to a number of sensors, which performs data fusion. It is responsible for data association and activity tracking algorithms, runs them whenever it receives raw data from the sensors, and inserts the data, with or without fusion, into the local database. The sensor data are decomposed into three levels: raw, aggregated and summarized. Raw level data are data with pre-processing but without data fusion. Aggregated level data are data/information from multiple sensors of the same modality with pre-processing and data fusion, i.e. after feature extraction and classification. Likewise, summarized level data are data/information from sensors of different modalities with pre-processing and data fusion. The raw data is kept in the local database, but only summarized level and aggregated level information is sent to the Tier-2 fusion node, for scalability purposes. The information sent to the Tier-2 fusion node is indexed by time, with the value of time obtained from a server using the NTP protocol [11].

The node is also UPnP-enabled [12]. Since our system predictably involves the use of a wide range of sensors, UPnP, with its standards-based, dynamic and flexible architecture supporting zero-configuration and automatic discovery, will allow us to easily add or replace sensors and reconfigure them with minimum effort. This results in ease of actual deployment and subsequent maintenance of the system.

As an illustration using our application as an example, the Tier-1b fusion node can employ sophisticated feature extraction algorithms to extract features of the patient such as moment, centroid and contour using the video camera


modality, movement of the patient along the vertical axis using the ultrasound sensors, and movement of the patient from one region to another using the pressure sensors.
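The three-level data decomposition at a Tier-1b node can be sketched as follows; the class and field names are our own assumptions, and the mean is a stand-in for whatever same-modality fusion a real node would run:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Tier1bNode:
    """Sketch of a Tier-1b fusion node: raw data stays in the local
    database, while only aggregated/summarized records are forwarded."""
    local_db: list = field(default_factory=list)   # raw + fused records
    outbox: list = field(default_factory=list)     # destined for Tier-2

    def ingest(self, modality: str, samples: list):
        # Raw level: pre-processed readings, kept locally only.
        self.local_db.append({"level": "raw", "modality": modality,
                              "samples": samples})
        # Aggregated level: same-modality fusion (here, just the mean),
        # time-indexed; a real node would use NTP-synchronized time.
        agg = {"level": "aggregated", "modality": modality,
               "value": sum(samples) / len(samples), "time": time.time()}
        self.local_db.append(agg)
        self.outbox.append(agg)                    # raw is NOT forwarded

node = Tier1bNode()
node.ingest("ultrasonic", [1.0, 2.0, 3.0])
```

The point of the split is scalability: the outbox carries only compact fused records to Tier-2, while the bulky raw stream never leaves the node.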

Tier-2 Fusion Node (Sensor/Ward Level Fusion): This is where the low level information and features extracted from the sensors are fused together to form high level context that is more meaningful and relevant to humans. It consists of a repository, a query engine, an inference engine, a Bayes engine and a UPnP control point. Sesame [13] provides the context storage, with the Sesame RDF Query Language (SeRQL) as the context query language. The query engine provides the abstract interface for applications to extract desired contexts. The inference engine comprises a variety of techniques, ranging from rule based systems to neural networks and fuzzy logic, that aid the decision making process by injecting rules or logic into the inferencing stage. The UPnP control point coordinates the discovery of the behavioral context of the bedridden patients and disseminates this information to the ontology knowledge base using SOAP messages. A Bayes engine is integrated to perform information fusion between sensors of multiple modalities.

Tier-3 Fusion Node (Hospital/Ward Level Fusion): This is the node which provides customization for fusing higher level contexts into one that is relevant to an application or service. It consists of web service daemons and an application server using Tomcat that is integrated with a knowledge base containing the user's intent. Auxiliary sources in the form of ontologies, providing the context of the patient and related personnel in terms of profiles, schedules, social networks, etc., can be integrated into the node. The situation awareness module helps to discover important relationships previously not known to exist, and performs fusion and correlates a variety of higher level information or data types into knowledge that forms the basis for triggering intervention. Specifically, it performs fusion of information in different ontologies from multiple sources.

4 Experiments and Results

We are putting together an observational system in a hospital ward composed of multiple sensors of different modalities in order to collect data from dementia patients, as shown in Figures 5 and 6. It is hoped that through the deployment of these sensors, coupled with our fusion architecture and techniques, we can automate the detection of the onset of agitation in dementia patients and trigger intervention. While the preliminary deployment phase for data collection is ongoing, we have performed a number of experiments to evaluate whether our fusion considerations can meet the requirements of doctors and caregivers.

Figure 5 : Sensors Deployed in Ward

Figure 6 : Some Pictures of Sensors Deployed in Ward

For the sensor and ward level fusion considerations, we focus on the results for Up Down Movement agitation behavior on both the bed and the chair. Figures 7 and 8 show the Bayesian networks used to deal with uncertainty and improve the recognition rate for UDM in the chair and bed respectively.

Figure 7: Bayesian Network for Chair UDM Monitoring


Table 3: Experimental Results on UDM Agitation in Bed: Recognition Rate Reduction with an Inappropriate Sensor Modality Selected, through Bayesian Inference shown in Figure 8.

Figure 8: Bayesian Network for Bed UDM Monitoring

Using the initial sets of data collected, we found that with careful consideration of the supplemental sensing capability of each modality during sensor deployment, a much higher patient agitation detection rate can be achieved with multi-modality fusion, as shown in Tables 1 and 2.

Table 1: Experimental Results on UDM Agitation in Chair: Recognition Rate Improvement with Multi-Modality Sensor Fusion through Bayesian Inference shown in Fig. 7.

Table 2: Experimental Results on UDM Agitation in Bed: Recognition Rate Improvement with Multi-Modality Sensor Fusion through Bayesian Inference shown in Fig. 8.

We also found that the acoustic modality acts as good supplemental evidence to the other modalities for agitation detection, as the patient monitored in the experiment shown in Table 2 tends to shout when he exhibits UDM agitation behaviour in bed. However, for the patient who does not shout, in the experiment shown in Table 3, the data captured by the acoustic sensors give no significant evidence of agitation. In fact, selecting the acoustic modality during fusion greatly hinders the recognition rate. A higher agitation recognition rate can be achieved with a customized Bayesian network, learned from the training data of that particular patient, in which the acoustic modality is removed from the network shown in Figure 8.

With the above results, coupled with our ontology and web service based fusion middleware architecture, a sample web-based real-time summarized report on an instance of agitation behaviour of a dementia patient for a single SOAPD session is generated, as shown in Figure 9.

Figure 9: Observation Results for a SOAPD Session

Preliminary results for intervention are also quite encouraging and indicate that the response time can easily be less than a few seconds, subject to network delay, which is sufficient for most real-time interventions. We are now further validating our results in the hospital deployment, with the preliminary deployment phase scheduled to end in the middle of 2006. It is hoped that the joint effort with a local hospital will see us achieve our long-term objective of deploying the system in a real-life setting.

5 Conclusions

The monitoring and handling of persons with dementia in hospitals, nursing homes, or even one's own home is going to become increasingly important in the coming years due to the aging population. It is important for patients and caregivers that automated and non-obtrusive monitoring and handling procedures be developed. Our research on fusion considerations is a first step in this direction. We expect that with our fusion techniques and fusion middleware, we will increase the success rates of detection and enable the technology to be deployed pervasively in hospitals, nursing homes and patients' homes.

Table 3: UDM Agitation in Bed Recognition Rate
Ultrasonic sensor alone: 85.00
FBG pressure sensor alone: 81.00
Accelerometer sensor alone: 82.00
Acoustic sensor alone: 40.00
All sensor modalities with Bayesian inference: 52.00

Table 1: UDM Agitation in Chair Recognition Rate
Ultrasonic sensor alone: 59.00
FBG pressure sensor alone: 75.00
Both sensor modalities with Bayesian inference: 94.00

Table 2: UDM Agitation in Bed Recognition Rate
Ultrasonic sensor alone: 83.00
FBG pressure sensor alone: 83.00
Accelerometer sensor alone: 84.00
Acoustic sensor alone: 80.00
All sensor modalities with Bayesian inference: 88.00


References

[1] G. Abowd, et al., The Aware Home: A Living Laboratory for Technologies for Successful Aging, Proceedings of the AAAI Workshop on Automation as Caregiver, 2002
[2] K. Lorincz, et al., Sensor Networks for Emergency Response: Challenges and Opportunities, IEEE Pervasive Computing, October-December 2004
[3] J. Bardram, Applications of context-aware computing in hospital work: examples and design principles, Proceedings of the ACM Symposium on Applied Computing, 2004
[4] H. Christensen, Using logic programming to detect activities in pervasive healthcare, International Conference on Logic Programming, August 2002
[5] A.C. Hurley, et al., Measurement of Observed Agitation in Patients with Dementia of the Alzheimer Type, Journal of Mental Health and Aging, 1999
[6] N.J. Nilsson, Learning and Acting with Bayes Nets, in Artificial Intelligence: A New Synthesis, 1998
[7] Xiaohang Wang, et al., Semantic Space: An Infrastructure for Smart Spaces, IEEE Pervasive Computing, July-September 2004
[8] Victor Foo Siang Fook, et al., An Ontology-based Context Model in Monitoring and Handling Agitation Behaviour for Persons with Dementia, UbiCare Workshop at the IEEE International Conference on Pervasive Computing and Communications, 2006
[9] DIG, http://dl.kr.org/dig/
[10] RACER, http://www.racer-systems.com/
[11] NTP, http://www.ntp.org/
[12] UPnP, http://www.upnp.org/
[13] Sesame, http://www.openrdf.org/