
IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 53, NO. 3, JUNE 2006 761

Design and Test Issues of an FPGA Based Data Acquisition System for Medical Imaging Using PEM

Carlos Leong, Pedro Bento, Pedro Lousã, João Nobre, Joel Rego, Pedro Rodrigues, José C. Silva, Isabel C. Teixeira, J. Paulo Teixeira, Andreia Trindade, and João Varela

Abstract—The main aspects of the design and test (D&T) of a reconfigurable architecture for the Data Acquisition Electronics (DAE) system of the Clear-PEM detector are presented in this paper. The application focuses on medical imaging using a compact PEM (Positron Emission Mammography) detector with 12288 channels, targeting high sensitivity and spatial resolution. The DAE system processes data frames that come from the front-end (FE) electronics, identifies the relevant data and transfers it to a PC for image processing. The design is supported by a novel D&T methodology, in which hierarchy, modularity and parallelism are extensively exploited to improve design and testability features. Parameterization has also been used to improve design flexibility. Nominal frequency is 100 MHz. The DAE must respond to a data acquisition rate of 1 million relevant events (coincidences) per second, under a total single photon background rate in the detector of 10 MHz. Trigger and data acquisition logic is implemented in eight 4-million, one 2-million and one 1-million gate FPGAs (Xilinx Virtex II). Functional Built-In Self Test (BIST) and Debug features are incorporated in the design to allow on-board FPGA testing and self-testing during product lifetime.

Index Terms—Functional built-in self test, hierarchy, modularity, parallelism, parameterization, pipelining, process diagrams, re-use.

I. INTRODUCTION

BREAST cancer early detection is recognized as a worldwide priority, since it constitutes the most effective way to deal with this illness. Nevertheless, the detection specificity of present diagnosis systems is low [1]. Therefore, research on new diagnosis processes and systems for this type of cancer is actively pursued. Positron Emission Tomography (PET) based technology is one of these promising research lines. PET technology is used in the development of the Clear-PEM scanner, a high-resolution Positron Emission Mammography (PEM) system, capable of detecting tumors with diameters down to 2 mm [1]–[5]. Based on the detection of radiation emitted by human cells when a radioactive substance is injected into the human blood stream [3], PET identifies, by image reconstruction, the spatial origin of the radiation source (the cancerous cells).

Image reconstruction algorithms demand millions of pixels to provide acceptable accuracy. Hence, for a correct medical diagnosis, huge amounts of data must be generated and processed. The purpose of this paper is to present key aspects of a novel design and test methodology for high data-volume, data stream digital systems and to apply it to the development of the Data Acquisition Electronics (DAE) system responsible for the digital data processing in the Clear-PEM scanner.

Manuscript received June 19, 2005; revised March 30, 2006. This work was supported in part by AdI (Innovation Agency) and POSI (Operational Program for Information Society), Portugal. P. Rodrigues and A. Trindade were supported by the FCT under Grant SFRH/BD/10187/2002 and Grant SFRH/BD/10198/2002.
C. Leong and P. Bento are with INESC-ID, Lisboa, Portugal.
P. Lousã, J. Nobre, and J. Rego are with INOV, Lisboa, Portugal.
P. Rodrigues and A. Trindade are with the Laboratório de Instrumentação e Física de Partículas, Lisboa, Portugal.
J. C. Silva is with the Laboratório de Instrumentação e Física de Partículas, Lisboa, Portugal, and also with CERN, Geneva, Switzerland.
I. C. Teixeira and J. P. Teixeira are with INESC-ID, Lisboa, Portugal, and also with the Instituto Superior Técnico, Universidade Técnica de Lisboa, Portugal.
J. Varela is with the Laboratório de Instrumentação e Física de Partículas, Lisboa, Portugal, and also with CERN, Geneva, Switzerland, and the Instituto Superior Técnico, Universidade Técnica de Lisboa, Portugal.
Digital Object Identifier 10.1109/TNS.2006.874841

Along with innovative high resolution PEM technology, new physical data, algorithms and methodologies are under intensive research. Therefore, hardware/software solutions using reconfigurable hardware (i.e., FPGA-based) constitute an adequate choice. Additionally, reconfigurable hardware solutions are also adequate for the volume production of the envisaged product.

The main design challenge in this context is the need to process huge amounts of data [4] and to perform tumor cell identification (if resident in the patient tissues) in the shortest time possible. We refer to this as the medical diagnosis process. These constraints demand an efficient electronic system, which means hardware data processing and extensive use of parallelism and pipelining. In order to meet the functional and performance requirements, moderate speed, high pin count, complex FPGAs should be used (for the design in which the novel methodology is implemented, Xilinx Virtex II devices have been used).

The paper is organized as follows. In Section II, a brief description of the Clear-PEM detector system architecture is presented. Section III presents the main aspects of the proposed methodology, including key functional, performance and testability issues. In Section IV, DAE implementation details are provided. Design validation and prototype verification procedures are presented in Section V. Finally, Section VI summarizes the main conclusions of this work.

II. CLEAR-PEM DETECTOR SYSTEM

The Clear-PEM detector system is a PET camera for breast imaging designed to optimize the detection sensitivity and spatial resolution [1], [2]. It consists of two parallel detector heads, corresponding to a total of 12288 readout channels. The system is designed to support a data acquisition rate of 1 million events per second, under a total single photon background rate of 10 MHz [2]. An event or hit (photoelectric event or Compton, according to the associated energy) is defined as the interaction of a γ ray with a crystal. Data to be analyzed and processed correspond to the energy originated in the different crystals as a consequence of these collisions.

0018-9499/$20.00 © 2006 IEEE

Fig. 1. Sampled energy pulse associated with a hit.

Relevant data are associated with relevant events. An event is defined as relevant if it corresponds to a coincidence, that is, the simultaneous occurrence of hits in both crystal planes. In this context, simultaneous means within the same discrete time window, characterized by a pair of parameters, Time Tag/Delta. Time Tag (TT) is the time instant associated with the data sample that corresponds to the highest value in the energy pulse associated with one hit. Delta is the difference between the time instant associated with the analog energy peak and the Time Tag (sample) (see Fig. 1).
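The coincidence criterion above can be sketched behaviorally. The following Python model is an illustration, not the authors' VHDL trigger logic: the window width, the hit layout and all function names are our assumptions. It flags two hits as a relevant event when they occur on opposite crystal planes and their Delta-corrected peak times fall within the same discrete time window:

```python
# Behavioral sketch of the coincidence criterion (illustrative only).
# A hit is modeled as (time_tag, delta, plane).

COINCIDENCE_WINDOW = 2  # clock ticks; illustrative value, not the paper's

def peak_time(hit):
    """Analog peak time reconstructed from Time Tag and Delta."""
    time_tag, delta, _plane = hit
    return time_tag + delta

def is_coincidence(hit_a, hit_b, window=COINCIDENCE_WINDOW):
    """True if the two hits are simultaneous and on opposite planes."""
    _, _, plane_a = hit_a
    _, _, plane_b = hit_b
    if plane_a == plane_b:          # a coincidence involves both heads
        return False
    return abs(peak_time(hit_a) - peak_time(hit_b)) <= window

def find_coincidences(hits):
    """Scan a hit list and return all plane-to-plane coincidences."""
    events = []
    for i, a in enumerate(hits):
        for b in hits[i + 1:]:
            if is_coincidence(a, b):
                events.append((a, b))
    return events

# hits: (time_tag, delta, plane)
hits = [(100, 0.4, "top"), (100, 0.6, "bottom"), (250, 0.1, "top")]
print(find_coincidences(hits))   # the two hits near t = 100 coincide
```

In the real DAE this comparison is performed in hardware, by the trigger logic, on the Time Tag/Delta pairs delivered by the DAQ FPGAs.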

System functionality is partitioned between the Front-End (FE) electronics (details in [3], [4]) on the detector heads, and the off-detector DAE. The FE is responsible for the construction and distribution of the analog data frames that correspond to the occurrence of hits. This work focuses on the novel design and test methodology applied to the DAE development. The proposed DAE architecture is described in the following section.

III. DESIGN AND TEST METHODOLOGY

The proposed Design and Test (D&T) methodology targets high volume, high rate data processing systems to which data are concurrently delivered. Such a system must satisfy the following objectives: 1) functional and performance compliance with the functional and timing specifications; 2) testability and debugging capabilities, to allow functional self-test and prototype debug; and 3) easily modifiable functionality, to allow low-cost system design adaptation to requirements modifications, which may occur, e.g., as the result of refinements of algorithms and calibration procedures.

To meet these objectives, the following system attributes are pursued in the hardware D&T methodology: hierarchy, modularity, module re-use, parallelism, pipelining and parameterization.

Cost-effective prototype validation of such a complex system is a mandatory requirement. This excludes the use of external test equipment only. Therefore, unassigned FPGA resources should be used, as much as possible, to implement built-in test structures to support system diagnosis and debug, and self-test during product lifetime. FPGA devices integrated in the DAE system are assumed defect-free (they are marketed after production test). Hence, FPGA test resources are not built in to perform the structural test (as is usual in manufacturing), but rather to carry out functional test. We refer to this feature of the proposed D&T methodology as functional BIST.

The complexity and the specificity of the problem justified the development of a new D&T methodology. The following aspects have been taken into consideration in its development. Although huge amounts of data arrive at the DAE at a relatively high rate, the information flowing from each channel is identical (since it comes from similar crystals) and should be submitted to identical processing. Therefore, the electronic system architecture should reflect this characteristic by exhibiting high replication, or re-use, of identical processing modules. An important aspect to be addressed is the choice of the granularity of the modules. Should a module correspond to a single crystal, since a crystal is the source of data, or should it correspond to some crystal cluster?

It has been decided that the DAE architecture should map the organization of the crystal arrays. Data are provided by 12288 readout channels (two channels/crystal, one for the top and one for the bottom plane). These channels are organized in 2 × 96 identical detector modules distributed over the two crystal planes. Data arrive at the DAE in parallel. Thus, they should, as much as possible, be processed in parallel.

On the other hand, data are transmitted from the FE to the DAE by a large number of cables that may introduce diverse delays. However, to guarantee that a detected coincidence is effectively a coincidence, it is mandatory to guarantee system synchronism.

The random nature of data generation is another aspect that has been considered. In fact, huge amounts of randomly generated data arrive at the DAE and require processing to determine whether they should be considered relevant or not. Irrelevant data must be discarded as quickly as possible.

Fig. 2. Top-level model of PEM DAE electronic system.

Fig. 3. Process diagram corresponding to the normal/random mode scenario.

Therefore, the D&T methodology should lead to a DAE architecture that reflects the DAE hierarchy and the modular character of the scanner, as well as data flow parallelism, using multiple instantiation (re-use) of identical modules.

In order to take into account the random nature of the data, enough memory banks must be allocated in the architecture to temporarily store data until they can be safely discarded. Moreover, the methodology should consider the physical limitations imposed on the timing requirements by the interconnection cables, which demand adequate design techniques to guarantee synchronism.

In the next sections, the rationale behind the methodology will emerge in the explanation of the DAE architecture.

 A. DAE Architecture

The main functionality of the DAE is to identify relevant data coming from the FE electronics. Fig. 2 depicts the top-level architecture of the DAE system. This figure highlights the hierarchical nature of the design. In fact, the system is composed of four identical DAQ boards, each one with two identical FPGAs (DAQ FPGA) that implement the Data Acquisition (DAQ) functionality.

TABLE I
DATA AND CONTROL SIGNALS DESCRIPTION

Fig. 4. Mapping processes in modules at architecture level.

In each DAQ FPGA, system functionality is partitioned into DAQ (synchronization and processing), Read-Out Controller (ROC) and Filter.

Each one of the four DAQ boards maps 48 crystal modules. Each DAQ FPGA inside each DAQ board processes data corresponding to 24 crystal modules. Modularity and hierarchy are also present in the design of each module that constitutes the DAQ FPGA.

The Trigger and Data Concentrator (TGR/DCC) board houses the TGR/DCC FPGA, which implements the Trigger and Data Concentration functionality. This FPGA is responsible for the detection of coincidence occurrences. This functionality is implemented in module TGR (Fig. 2). Whenever a coincidence is detected, a trigger signal is generated. The presence of this signal indicates to the DAQ that the corresponding data must be made available to the DCC module. The DCC ROC module is responsible for data organization according to the communication protocols. The DBArb module is the arbiter of the Dedicated Bus.

The TGR/DCC board also houses the PCI FPGA, which implements the controller of the PCI Bus and is responsible for the communication between the DAE and the external PC. Within each FPGA a test module represents all the built-in test structures that are used for functional test and for prototype debug. The test structure is also modular. In the figure, a single Test Module represents the test structure. However, test structures are associated with each one of the different functional modules. For debug purposes, dedicated connectors (CON) are available at the different boards, to allow accessibility to and from the test equipment.


In Fig. 2, LVDS stands for Low Voltage Differential Signaling, which is a low noise, low power, low amplitude method for high-speed (gigabits per second) data transmission over copper wire.

Two proprietary buses, the Generic Bus and the Dedicated Bus, are responsible for the fast information flow within the DAE system.

 B. Process Diagrams

In the proposed D&T methodology, system functionality is partitioned into sub-functions, in a hierarchical way, to satisfy design, testability and diagnostic requirements. The partitioning procedure is based on the characterization of data and data streams, as well as on the processes that transform data.

Process Diagrams, PD, (e.g., Fig. 3) are used to describe data storage, processing, and data and control flow. Process Diagrams ease problem characterization and modeling. This procedure has been adapted from the software domain [7], [8], where this kind of modeling is used as a "thinking tool" to characterize the problem under analysis, as completely as possible, prior to initiating system design.

In Fig. 3, ellipses represent processes, rectangles represent external objects that dialog with the processes and arrows represent information flow.

A process is defined as a set of functions that carry out a given functionality. Each ellipse conveys the process name and the number of instances of that process (e.g., x4 means that there are 4 instances of this module in the architecture). Each process can be instantiated more than once. For instance, DAQ Sync is instantiated 4 times. By doing so, modularity, re-use and parallelism are highlighted.

Each arrow conveys data and control signal information. Different types of arrows represent different types of information. In this particular case, a distinction is made between functional operation mode (dotted lines) and test mode (dashed and continuous lines).

In test mode, a distinction is also made between data originated by the test modules, that is, the test vectors (dashed lines), and the modules' responses to the test vectors, that is, the modules' signatures (continuous lines).

In a good design, Process Diagrams should present low connectivity, that is, processes should be designed so that their associated functionality is executed as independently from the other processes as possible. This eases the implementation of hierarchy and parallelism in the design structures.

Another aspect that is contemplated in the Process Diagrams is the time variable. In fact, although it does not appear explicitly in the diagrams, it is conveyed in the control signals that, together with data, define the flow of information between processes.

To guarantee, as much as possible, the completeness of the functional description, the concept of operational scenario is introduced. In this context, a scenario is defined as the set of processes and corresponding data and control flow that represents the complete execution of the functionality in a given operation mode.

Fig. 5. Complete FPGA Test Procedure.

Scenario identification is indicated by the index i in fi. As an example, f1.T15 means the test flow T15 in scenario 1. If necessary, additional meaning can be associated with the remaining indexes (indicating, e.g., the source and the target modules).

For the DAE, five operational scenarios have been identified, namely, 1) normal/random mode, 2) single mode (for calibration), 3) constant loading parameters (for calibration), 4) function mode loading and 5) error request. For the sake of completeness, all the different scenarios that correspond to the DAE operation modes must be described in terms of Process Diagrams.

As an example, Fig. 3 depicts the Process Diagram (PD) of the DAE normal/random mode scenario. In this diagram, processes corresponding to functional BIST structures are already included (test process). As shown, data and control signals are the inputs and outputs of the transforming processes. Each process can be further decomposed into sub-processes, and described in more detailed PDs that correspond to lower hierarchical levels. In this way, hierarchy emerges.

Table I provides some examples of the data and control signals of the PD described in Fig. 3. Although, in the software domain, PDs typically describe static flow and processing of data, their reuse in the context of our methodology takes dynamic features into consideration.

C. Mapping Processes Into Design Modules

At the design level, ideally, each process should correspond to a hardware module, or sub-module. In Fig. 4 the correspondence between processes in the Process Diagram and modules in the FPGA architecture is shown.

Design documentation (and test planning) requires the specification of the data and control signals identified in the Process Diagrams, according to their format and timing requirements.

Lastly, all these modules are designed so they can be configured offline and/or online, a very useful feature for prototype validation.

 D. Performance Issues

Taking into account that the main objective of the system is the identification of coincidences, it is easy to understand that synchronism is a critical issue in this system (de-synchronization may mainly be due to the long and diverse lengths of the interconnection cables). In fact, if synchronism is lost, data become meaningless. To guarantee synchronism in key parts of the circuit, where delays associated with previous processing or data paths could be variable, self-adjusted pipeline structures are used. The latter case arises because the data come through an asynchronous bus, so the data are scrambled in the time domain and must be de-scrambled before processing. The former case provides auto-adaptation to the cable length (cable delay).

Fig. 6. Test structure for processes 2 and 3.
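The de-scrambling step can be illustrated in software. The sketch below is a behavioral analogue only: the reordering depth and frame layout are our assumptions, and the hardware uses self-adjusted pipelines rather than a priority queue. It re-emits frames that arrive out of time order from the asynchronous bus in Time Tag order:

```python
import heapq

# Software analogue of time-domain de-scrambling (illustrative only):
# frames arrive out of order and are re-emitted in Time Tag order once
# a bounded reordering depth is exceeded.

REORDER_DEPTH = 4  # frames buffered before releasing the oldest; illustrative

def descramble(frames, depth=REORDER_DEPTH):
    """Yield (time_tag, payload) frames in non-decreasing Time Tag order."""
    heap = []
    for frame in frames:
        heapq.heappush(heap, frame)        # ordered by time_tag
        if len(heap) > depth:
            yield heapq.heappop(heap)      # release the earliest frame
    while heap:
        yield heapq.heappop(heap)

scrambled = [(3, "c"), (1, "a"), (2, "b"), (5, "e"), (4, "d")]
print(list(descramble(scrambled)))   # frames come out ordered by Time Tag
```

Note that this only works when no frame is displaced by more than the buffering depth; in hardware, the pipeline depth must likewise cover the worst-case cable and processing skew.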

Moreover, it is necessary to guarantee the working frequency of 100 MHz. To achieve this purpose, registers are inserted between modules whenever required. The modular character of the design significantly simplifies this procedure. As referred, identical modules are used in parallel processing mode. Different modules can work at different frequencies. Synchronous and/or asynchronous FIFOs are used to guarantee the correct data transfer between modules. With this generic approach, implementing functional BIST structures is equivalent to implementing any other functionality.

 E. Testability Issues

DAE testing [8] is carried out in order to ensure: 1) design and prototype validation, diagnosis and debug, and 2) lifetime self-test. This may be carried out at component, board and system level. As mentioned before, the complexity of the system would make its functional test extremely complex, if based on the use of external equipment only. Therefore, a test resource partitioning strategy has been adopted. Almost all the DAE test procedures are embedded in the FPGA design with negligible overhead: unused silicon area and limited speed degradation. The implemented functional BIST structures support both above-mentioned objectives [9].

The functional built-in test modules in the different FPGAs aim at: 1) the verification of the correctness of the DAE system functionality and performance, and 2) the diagnosis and debug of the DAE system, or subsystems.

Fig. 7. Crate Overview.

TABLE II
ALLOCATED RESOURCES

Moreover, not only must the DAE system functionality be correctly implemented, but timing requirements must also be met. Therefore, Functional Test and Performance Test are carried out for the different system operating modes, or scenarios.

In Fig. 5 the FPGA test procedure is depicted. As can be observed, for each scenario each FPGA is completely tested using two working frequencies. First, the system is tested at half speed. If everything works according to specifications, then the functionality is correct (Functional Test). Afterwards, the system is again tested at nominal speed (Performance Test). If errors occur, it is possible to conclude that these are timing errors. At each step, and for all scenarios, tests may be carried out at different hierarchical levels, targeting components, modules, boards or the system.
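The decision logic behind this two-frequency procedure fits in a few lines. The sketch below is illustrative (the verdict names are ours, not the paper's): a pass/fail outcome at half speed and at nominal speed is mapped to a diagnosis.

```python
# Sketch of the two-frequency test policy of Fig. 5 (illustrative):
# a module is exercised at half speed and then at nominal speed, and
# the pair of outcomes is mapped to a verdict.

def classify(pass_half_speed, pass_nominal_speed):
    """Map the two test outcomes to a diagnosis."""
    if not pass_half_speed:
        return "functional error"    # wrong behavior even at half speed
    if not pass_nominal_speed:
        return "timing error"        # correct logic, fails only at full speed
    return "pass"

print(classify(True, True))    # pass
print(classify(True, False))   # timing error
print(classify(False, False))  # functional error
```

The key point is the ordering: a half-speed failure masks any timing question, so timing errors are only diagnosed once functionality has been established.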

An example of a test structure is presented in Fig. 6, corresponding to processes 2 and 3 in Fig. 3. As shown, a set of test benches, TB1, TB2, and Null TB, is applied to the processes to be tested. Comparators are used to validate the module outputs by comparison with the expected signatures. These test benches and expected outputs are generated by the Geant4 Monte Carlo simulation toolkit and the DIGITsim DAQ Simulator [2] and stored in ROM blocks within the FPGAs.

Testing is carried out in two steps, one non-deterministic and one deterministic. The non-deterministic test verifies that all duplicated modules and blocks have identical responses for the same input vectors, which include Monte Carlo digitized data frames. The deterministic test verifies that the functionality, namely the evaluation of the two key values (Delta/Time Tag and Energy) and samples [4], is correct on, at least, one complete signal path. The deterministic test also verifies that the filter master block functionality (communication with the trigger and the other DAQ FPGA) is correct. Test outputs are the corresponding signatures. Functional BIST structures have been implemented without significant degradation of system performance.

Fig. 8. DAQ board.
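The comparator-against-stored-signature scheme can be modeled in software. In the sketch below, the signature compactor, the toy module and the test vectors are all illustrative assumptions; in the actual design the vectors and expected signatures are produced offline by Geant4/DIGITsim and held in FPGA ROM blocks:

```python
# Software analogue of the Fig. 6 test structure (illustrative only).

def signature(responses):
    """Fold a response stream into a compact signature word."""
    sig = 0
    for word in responses:
        sig = ((sig << 1) ^ word) & 0xFFFF   # toy 16-bit compactor
    return sig

def functional_bist(module, test_vectors, expected_signature):
    """Deterministic step: apply stored vectors, compact the responses,
    and compare with the value held in ROM."""
    responses = [module(v) for v in test_vectors]
    return signature(responses) == expected_signature

def duplicated_modules_agree(module_a, module_b, test_vectors):
    """Non-deterministic step: two copies of a module must respond
    identically to the same input vectors."""
    return all(module_a(v) == module_b(v) for v in test_vectors)

# Toy module: the "golden" signature is computed from a reference copy.
golden = lambda x: (x * 3) & 0xFF
vectors = [1, 2, 7, 42]
rom_sig = signature([golden(v) for v in vectors])

print(functional_bist(golden, vectors, rom_sig))                            # True
print(duplicated_modules_agree(golden, lambda x: (x * 3) & 0xFF, vectors))  # True
```

A hardware signature compactor would typically be an LFSR-based MISR rather than this toy shift-and-xor, but the pass/fail decision, comparing a compacted response against a stored constant, is the same.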

IV. IMPLEMENTATION DETAILS

The PEM DAE system is implemented in a set of boards housed in a 6U Compact PCI crate (see Fig. 7). A generic and a dedicated bus are used for data exchange among boards.

The data acquisition reconfigurable logic is implemented in large FPGAs with four million gates (Xilinx Virtex II xc2v4000-4bf957) (eight DAQ FPGAs). Another FPGA (Xilinx Virtex II xc2v2000-4bg575), with two million gates, implements the TGR/DCC module. A third FPGA (Xilinx Virtex II xc2v1000-4bg575), with one million gates, implements the PCI Core.

Table II indicates the allocated resources on the DAQ and on the TGR/DCC FPGA (prior to functional BIST). Using standard routing effort, the design achieved a register-to-register delay of 9.348 ns, which corresponds to a clock frequency of 107 MHz. Speed degradation due to BIST insertion is minimal (107 to 104 MHz, less than 5%).
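As a quick sanity check on the figures above (simple arithmetic, not part of the paper):

```python
# A register-to-register delay of 9.348 ns corresponds to a ~107 MHz
# clock, and the 107 -> 104 MHz drop after BIST insertion is under 5%.

delay_ns = 9.348
f_mhz = 1e3 / delay_ns            # 1 / (9.348e-9 s), expressed in MHz
print(round(f_mhz))               # ~107

degradation = (107 - 104) / 107
print(f"{degradation:.1%}")       # well under the quoted 5% bound
```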

In Fig. 8, the actual DAQ board, a twelve-layer board, can be seen. The bus connectors, as well as the main components of the board (LVDS transceivers and FPGAs), are pointed out.

V. DESIGN VERIFICATION AND PROTOTYPE VALIDATION

Detailed simulations of the Clear-PEM detector and trigger system, using the Geant4 Monte Carlo simulation toolkit and a high-level C simulation of the data acquisition system [10], have been carried out to produce realistic datasets. These datasets have been used to study the FPGA design, assess the hardware implementation and evaluate the influence of the data acquisition system on the reconstructed images. For these tasks, a simulation framework has been implemented. Details of this framework are provided in [2], [4].

Test vectors generated by the simulation framework have been used for VHDL design validation. The following strategy has been followed: events produced by the Geant4-based modules (Phantom Factory/PEMSim) [4], [11] were interfaced with DIGITSim, and a list of digitized data frames was obtained (two for each hit). Each data frame corresponds to the information sent by the front-end system and contains ten samples plus "Monte Carlo truth" variables: energy and phase. This same list of samples has been used as stimuli to the VHDL test bench (compiled and synthesized with ISE Project Navigator 6.2.03i and simulated with ModelSim XE II 5.7g) and to the DIGITSim DAQ Simulator. Results obtained with the VHDL and DIGITSim descriptions of the PEM system are coincident.

Fig. 9. Design and validation flow.
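The cross-validation pattern described above can be sketched as follows. Both "models" here are trivial stand-ins (the real comparison is between the synthesized VHDL and the C-based DIGITSim); the frame fields and function names are illustrative assumptions.

```python
# Sketch of the cross-validation step: the same stimulus list is fed to
# two independent descriptions of the system, whose per-frame outputs
# must coincide. Both models below are mocked stand-ins.
stimuli = [
    {"samples": list(range(10)), "mc_energy": 511.0, "mc_phase": 0.3},
    {"samples": list(range(10, 20)), "mc_energy": 340.2, "mc_phase": 0.7},
]

def vhdl_bench(frame):
    # Stand-in for the VHDL test bench output (energy-like, time-like values).
    return (sum(frame["samples"]), max(frame["samples"]))

def digitsim(frame):
    # Stand-in for the reference DIGITSim model of the same computation.
    return (sum(frame["samples"]), max(frame["samples"]))

for frame in stimuli:
    assert vhdl_bench(frame) == digitsim(frame), "descriptions disagree"
```

Frame-by-frame agreement between the two independent descriptions is what gives confidence that the hardware implements the intended algorithm.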

Fig. 9 represents the overall design and validation data flow. Three main steps are highlighted in this figure: system-level simulation, carried out with the Geant4 Monte Carlo simulation toolkit and a C model; design validation, using the VHDL description and Xilinx ISE/ModelSim; and, finally, prototype validation, carried out with test equipment and the functional BIST structures.

The first two steps take place during the design phase, although, as mentioned before, test benches used for prototype validation and lifetime self-test are generated at phase one.

As indicated in the figure, some FPGA reconfiguration may be required during the prototype validation phase. Also indicated in the figure is the re-use of functional BIST structures for lifetime test. This test is carried out at power-up and by user request, using a software command.
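The two lifetime-test triggers (power-up and software command) can be sketched as follows. The BIST itself is mocked, and the class and command names are assumptions for illustration, not the paper's software interface.

```python
# Illustrative sketch of when the functional BIST re-runs during the
# product lifetime: once at power-up, and again on a software command.
def run_functional_bist():
    # Stand-in: would re-use the on-chip BIST structures and compare
    # the resulting signatures against the expected ones.
    return True

class DaeBoard:
    def __init__(self):
        # Power-up self-test, run before the board is put into service.
        self.healthy = run_functional_bist()

    def handle_command(self, cmd):
        if cmd == "SELF_TEST":           # user-requested lifetime test
            self.healthy = run_functional_bist()
        return self.healthy

board = DaeBoard()
assert board.healthy
assert board.handle_command("SELF_TEST")
```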

VI. CONCLUSION

A design and test methodology for the design of the DAE of 

the Clear-PEM scanner has been presented. Underlying princi-

ples of the D&T methodology are the extensive use of hierarchy,

modularity, re-use, pipelining, parallelism and parameterization

in hardware implementation. Using these attributes facilitate the

design process, and design and prototype functionality and per-

formance validation. Parameterization leads to more flexible de-

signs, allowing the introduction of late modifications in system

requirements without significant re-design effort.

Functional BIST structures, embedded in the FPGA components, allow prototype debug (significantly reducing the complexity and costs of test equipment) and lifetime self-test. These test structures have been implemented without significant degradation of system performance (less than 5% for the DAQ FPGA), although they occupy (in the case of the DAQ FPGA) around  of the FPGA resources.

In the future, refined algorithms will be implemented for coincidence detection, and testability will be revised.

REFERENCES

[1] P. Lecoq and J. Varela, "Clear-PEM, a dedicated PET camera for mammography," Nucl. Instrum. Meth. A, vol. 486, pp. 1–6, 2002.

[2] A. Trindade, "Design and evaluation of the Clear-PEM scanner for positron emission mammography," IEEE Trans. Nucl. Sci., to be published.

[3] J. Varela, "Electronics and data acquisition in radiation detectors for medical imaging," Nucl. Instrum. Meth. A, vol. 527, pp. 21–26, 2004.

[4] P. Bento, "Architecture and first prototype tests of the Clear-PEM electronics systems," in IEEE MIC, Rome, Italy, 2004.

[5] N. Matela, "System matrix for Clear-PEM using ART and linograms," in IEEE MIC, Rome, Italy, 2004.

[6] OMG-Unified Modeling Language, v1.5, Rational, 2003.

[7] B. Selic and J. Rumbaugh, "Using UML for modeling complex real-time systems," white paper, 1998.

[8] G. Hetherington, T. Fryars, N. Tamarapalli, M. Kassab, A. Hassan, and J. Rajski, "Logic BIST for large industrial designs: Real issues and case studies," in Proc. IEEE Int. Test Conf., 1999, pp. 358–367.

[9] P. Bento, C. Leong, I. C. Teixeira, J. P. Teixeira, and J. Varela, Testability and DfT/DfD Issues of the DAE System for PEM, Tech. Rep., version 3.1, Jan. 2005.

[10] S. Agostinelli, "GEANT4—A simulation toolkit," Nucl. Instrum. Meth. A, vol. 506, pp. 250–303, 2003.

[11] P. Rodrigues, "Geant4 applications and developments for medical physics experiments," IEEE Trans. Nucl. Sci., vol. 51, no. 4, pp. 1412–1419, Aug. 2004.