

    IEEE REVIEWS IN BIOMEDICAL ENGINEERING, VOL. 2, 2009 23

Three-Dimensional Ultrasound: From Acquisition to Visualization and From Algorithms to Systems

    Kerem Karadayi, Student Member, IEEE, Ravi Managuli, Member, IEEE, and Yongmin Kim, Fellow, IEEE

    Methodological Review

Abstract—One of the key additions to clinical ultrasound (US) systems during the last decade was the incorporation of three-dimensional (3-D) imaging as a native mode. Compared to previous-generation 3-D US imaging systems, today's systems offer easier volume acquisition and deliver superior image quality with various visualization options. This has come as a result of many technological advances and innovations in transducer design, electronics, computer architecture, and algorithms. While freehand 3-D US techniques continue to be used, mechanically scanned and/or two-dimensional (2-D) matrix-array transducers are increasingly adopted, enabling higher volume rates and easier acquisition. More powerful computing engines with instruction-level and data-level parallelism and high-speed memory access support new and improved 3-D visualization capabilities. Many clinical US systems today have a 3-D option that offers interactive acquisition and display. In this paper, we cover the innovations of the last decade that have enabled the current 3-D US systems, from acquisition to visualization, with emphasis on transducers, algorithms, and computation.

Index Terms—3-D ultrasound, 4-D ultrasound, algorithms, systems, transducers, volume rendering.

    I. INTRODUCTION

SINCE its advent more than half a century ago, diagnostic ultrasound (US) imaging has undergone significant transformations. It started out with one-dimensional (1-D) amplitude mode (A-mode), in which tissue structures only along a single scanline could be depicted or tracked over time (motion mode or M-mode), but transitioned to two-dimensional (2-D) imaging with the introduction of brightness mode (B-mode). Many other advances followed in the 1960s and 1970s, including continuous-wave [1] and pulsed Doppler US [2] and real-time B-mode US [3]. Color Doppler imaging was introduced in the 1980s [4], enabling real-time visualization of blood flow in 2-D. Digital beamforming [5] replaced analog beamforming in the 1990s, completing the digitalization of ultrasound machines from scan conversion to beamforming. These advances have led to substantial improvements in image quality and diagnostic utility, establishing 2-D US

imaging as a routine clinical exam preferred for its safety, cost-effectiveness, portability, and interactive visualization.

Manuscript received July 01, 2009. Current version published December 01, 2009.

K. Karadayi and Y. Kim are with the Departments of Electrical Engineering and Bioengineering, University of Washington, Seattle, WA 98195 USA (e-mail: [email protected]; [email protected]).

R. Managuli is with the Department of Bioengineering, University of Washington, Seattle, WA 98195 USA, and with Hitachi Medical Systems of America, Twinsburg, OH 44087 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/RBME.2009.2034132

Similar to 2-D US, the concept of three-dimensional (3-D) US was first demonstrated in the 1950s [6] and has long been proposed to overcome the limitations inherent in 2-D imaging of 3-D anatomy. Such limitations include the difficulty of analyzing structures lying in planes other than the original planes of acquisition, estimating volumes from 2-D-only measurements with geometrical assumptions, and obtaining the same views in longitudinal studies. Over the course of 3-D US development, various approaches were used to acquire US volume data. For example, Howry et al. [6] collected volume data from objects embedded in water baths using a mechanical assembly that translated a single-element transducer up and down while oscillating it sideways to acquire 2-D cross-sections at varying heights. Brinkley et al. [7] used a freehand approach, in which a mechanically scanning 2-D imaging probe was swept manually in a direction perpendicular to its imaging plane to acquire a series of arbitrary 2-D scans covering a 3-D region. They placed spark gaps on the US probe as sound sources to acoustically determine the position/orientation of the probe with six degrees of freedom, which was later used to register each 2-D scan plane into a 3-D Cartesian volume.

A lot of research in the 1980s and 1990s went into improving acquisition and visualization as well as demonstrating the clinical utility of 3-D US imaging. Freehand techniques during this time utilized tracking via magnetic sensors [8], optical sensors [9], or mechanically articulated arms [10]. Sensorless methods were used in applications where geometric accuracy was not deemed critical, and speckle decorrelation between consecutive frames was also proposed as a way to track probe motion without a position sensor [11]. Freehand techniques became popular because they were relatively inexpensive to implement and allowed unconstrained scan geometries (user-defined volumes). Dedicated 3-D probes were introduced in the 1990s, with the two prevailing technologies being mechanical probes [12] and 2-D matrix-array transducers [13]. However, 3-D US remained mostly confined to research settings in the 1990s due to tedious and lengthy acquisition and reconstruction, suboptimal image quality, and the lack of a clearly demonstrated added diagnostic value [14].

More innovations in transducer design, electronics, computer architecture, and algorithms occurred in the last ten years. Mechanical 3-D probes became more compact and faster, and 2-D

1937-3333/$26.00 © 2009 IEEE


Fig. 1. Fetus, 34 weeks old, with bilateral cleft lip visualized using (a) conventional 2-D US imaging and (b) 3-D US volume rendering. The bilateral cleft lip is very difficult to visualize in the conventional 2-D image but very easy to diagnose using 3-D US.

Fig. 2. MPR and volume-rendered image of a uterus. The arcuate uterus (an indentation on the superior aspect of the endometrial cavity) can be clearly visualized in the coronal (lower left) and volume-rendered (lower right) images. This view is very difficult to obtain in the conventional longitudinal (upper left) and transverse (upper right) views.

matrix-array transducers evolved from sparse arrays to fully sampled arrays to deliver higher image quality. Advances in computer architecture and semiconductor technology enabled more powerful processors that support instruction-level and data-level parallelism and high-performance input/output (I/O) capabilities. As a result, it became possible to support new and better 3-D visualization algorithms, reconstruct volumes as they are acquired, and provide visualization at interactive rates. These advances considerably eased some of the technical challenges experienced by previous-generation 3-D US systems. Current-generation 3-D systems deliver superior image quality with better visualization and offer easier acquisition with significantly reduced scan times. Four-dimensional (4-D) (3-D + time) imaging is now possible on certain systems because of the ability to capture and process 3-D volumes fast enough (e.g., 20 volumes/s) to visualize 3-D anatomy with its motion.

As a result of these developments, a new wave of clinical research in 3-D/4-D US was initiated. The originally claimed benefits of going to 3-D imaging are now being verified, and new uses are being demonstrated. For example, in obstetrics, 3-D views were shown to be superior to 2-D US in evaluating certain fetal anomalies, such as a bilateral cleft lip (Fig. 1) or a cleft secondary palate [15], [16], and in the assessment of fetal ribs [17], [18]. In gynecology, 3-D US was shown to be helpful in assessing congenital uterine anomalies [19]–[21], such as an arcuate uterus (Fig. 2). In echocardiography, the ability to obtain en face (forward-facing) views of the mitral valve from the left atrium, which was not possible prior to 3-D US, has


    KARADAYI et al.: THREE-DIMENSIONAL ULTRASOUND 25

Fig. 3. Transducer technologies used for 3-D US acquisition: (a) mechanical 3-D probes, (b) 2-D matrix-array transducers, and (c) freehand 3-D acquisition using a conventional 1-D array with a position sensor.

been found to facilitate the assessment of valvular disease [22]. Also, workflow improvements have been reported as a result of reduced scan times, because one volume acquisition can replace several planar acquisitions. One application shown to benefit from the reduced scan time is exercise stress echocardiography, during which measurements from multiple scan planes need to be made in a relatively short amount of time following the exercise, to be as close to peak stress as possible [23]. In such examinations, the availability of full volume datasets was also shown to facilitate better alignment of the same views for baseline and peak-stress measurements.

There are quite a few review papers on recent clinical experience with 3-D/4-D US [22], [24]–[39]. Also, several review papers on the technical aspects of 3-D US were published from 1996 to 2001 [40]–[44]. In this paper, we focus on the recent technological developments that have taken place in volume acquisition and visualization, including transducers, algorithms, and computation. We also discuss the remaining challenges to be overcome in 3-D US and the possible future directions of 3-D US technology.

    II. VOLUME ACQUISITION

The development and maturation of dedicated 3-D probes have played a major role in enabling easier and faster acquisition of volume data. These include mechanical 3-D probes and 2-D matrix-array transducers. In addition, freehand techniques have improved, and they continue to provide a lower cost alternative to dedicated 3-D probes.

    A. Mechanical 3-D Probes

A modern mechanical 3-D probe consists of a 1-D array transducer and a compact motor coupled together and placed inside the probe housing [Fig. 3(a)]. The motor translates, rotates, or wobbles the 1-D transducer back and forth to insonate a 3-D volume of interest. The constrained scan geometry of a mechanical probe makes registration of the acquired 2-D images simpler because the position and/or orientation of the transducer can be determined readily. Early-generation mechanical probes were slow (e.g., it took 4 s to acquire a 40° volume sector in 1993 [45]). They have evolved throughout the last decade to become more compact and faster, delivering volume acquisition rates of typically several volumes/s or higher depending on the sector size. This not only has reduced artifacts due to tissue/patient motion but also has enabled fast enough acquisition of volume datasets for interactive visualization.

Systems based on mechanical probes are cheaper to support than those based on 2-D matrix-array transducers because they use a 1-D array transducer and their beamforming requirements are the same as those of conventional 2-D imaging. They also provide image quality in the acquisition plane comparable to that delivered by conventional 2-D imaging. As a result, mechanical 3-D probes are increasingly being adopted for 3-D US volume acquisition, particularly in OB/GYN applications. Their use in cardiology applications, however, has been limited due to low volume acquisition rates. Also, since acquisitions are performed while the transducer is in motion and color/power Doppler imaging requires an ensemble of echoes (typically ) to be obtained along the same direction for each scanline, their volume rates are severely restricted in Doppler modes.


    B. Two-Dimensional Matrix-Array Transducers

Two-dimensional matrix-array transducers were first introduced in the early 1990s [13]. They steer an ultrasound beam electronically in two perpendicular directions (azimuth and elevation) to insonate a typically pyramid-shaped volume [Fig. 3(b)]. No moving parts are involved, and parallel receive beamforming techniques can be utilized in both lateral directions (azimuth and elevation) to acquire multiple scanlines (e.g., 4 × 4) for each transmitted ultrasound beam [46]. Therefore, they can achieve higher volume rates than mechanical probes. While high volume rates come at the expense of lowered image quality, due to broader transmitted beams and artifacts arising from multiple scanline formation per transmitted beam [47], 3-D echocardiography has considerably benefited from the availability of 2-D arrays.

On the other hand, the construction of 2-D matrix-array transducers and their support on 3-D US systems involve many challenges [48]. To be practically usable in echocardiography, the footprints of 2-D array transducers need to be similar to those of conventional 1-D array transducers to allow intercostal views of the heart without rib shadows. This implies that a large number of 2-D array elements need to be fitted into the same area as that of a 1-D array transducer (e.g., 64 × 64 elements instead of only 256 elements). The smaller element size and interspacing present difficulties in electrically connecting each piezoelectric crystal element within the transducer head. Smaller element size also results in lowered capacitance and hence increased electrical impedance mismatch between the element and the coaxial cable that interconnects the element to the US system [49], [50]. Many preamplifiers and matching circuitries in the probe housing are used to overcome the poorer transmission efficiency and improve the signal-to-noise ratio (SNR), which leads to bulkier probes.

Another challenge involving 2-D arrays is 3-D beamforming. Because focusing and steering are performed in both the azimuthal and elevational planes, a 2-D set of delays is required. This substantially increases the required number of channels over what would be required for a 1-D array. For example, to achieve beamforming quality similar to that of a 1-D array using 64 channels, 4096 channels would be necessary. This not only makes beamforming computationally very expensive but is also prohibitive in that the number of connections needed between the probe and the US machine is practically limited by the size and weight of the interconnecting probe cable.

All these challenges led the earlier 2-D array transducers to contain a smaller number of elements (e.g., 32 × 32 instead of 64 × 64), with only a subset of elements being active at the same time (i.e., sparse arrays) to overcome the beamforming complexity. Since only a smaller portion of the transducer aperture is used to transmit and receive, focusing was compromised, resulting in suboptimal beam shapes with large sidelobes [51]. Smaller apertures also further lowered the sensitivity. Consequently, the 2-D image quality achievable using the first-generation 2-D array transducers was considerably inferior to that typically provided by conventional 1-D array transducers.

It took another decade (i.e., the early 2000s) for fully sampled arrays to become practical [52]. In these, a subarray beamforming approach was taken, where all elements were used to transmit and receive but beamforming was split into two stages: fine delays and summation between the signals received by immediate-neighboring elements were implemented on compact analog electronics placed inside the transducer head, whereas larger delays and final summations were implemented digitally in the main beamformer unit inside the US machine. This keeps the number of active channels and connections to the beamformer unit manageable while enabling better beam profiles with reduced sidelobes. Also, because all the elements are used to transmit and receive, sensitivity is improved over sparse arrays. This has resulted in considerable image quality improvements over sparse 2-D arrays. Because they can focus equally in both lateral directions (elevation and azimuth), 2-D arrays have the advantage of improved elevational resolution over 1-D arrays. While fully sampled 2-D arrays offer significantly better image quality than the earlier sparse arrays, their azimuth-plane resolution and sensitivity are still inferior to those of conventional 1-D arrays as of today.
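The two-stage split described above can be illustrated with a minimal sketch. This is not an actual beamformer: delays are idealized as integer sample shifts, and the function names, subarray grouping, and delay values are hypothetical.

```python
def delay(signal, shift):
    """Shift a sampled signal by an integer number of samples (zero-padded)."""
    if shift <= 0:
        return signal[:]
    return [0.0] * shift + signal[:-shift]

def subarray_beamform(element_signals, fine_delays, coarse_delays, subarray_size):
    """Two-stage delay-and-sum.

    Stage 1 (inside the probe head): apply fine per-element delays and sum
    within each subarray of immediate-neighboring elements.
    Stage 2 (inside the machine): apply coarse per-subarray delays and sum
    the partial beams into the final beamformed signal.
    """
    n = len(element_signals)
    length = len(element_signals[0])
    partial_beams = []
    for s in range(0, n, subarray_size):
        summed = [0.0] * length
        for e in range(s, min(s + subarray_size, n)):
            d = delay(element_signals[e], fine_delays[e])
            summed = [a + b for a, b in zip(summed, d)]
        partial_beams.append(summed)
    out = [0.0] * length
    for k, beam in enumerate(partial_beams):
        d = delay(beam, coarse_delays[k])
        out = [a + b for a, b in zip(out, d)]
    return out
```

With delays chosen to align the per-element echoes, the impulses add coherently into a single peak, which is the point of splitting the delays into a fine intra-subarray stage and a coarse inter-subarray stage.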

    C. Freehand Scanning

While mechanical 3-D probes and 2-D matrix-array transducers are increasingly used for volume acquisition, freehand techniques continue to remain a less costly alternative. In sensor-based freehand acquisition, a position sensor is attached to a conventional 1-D-array probe [Fig. 3(c)], and image and sensor data are acquired simultaneously. The 3-D US volumes are then visualized following geometric registration of the arbitrary 2-D slices into 3-D Cartesian coordinates and reconstruction. Sensorless acquisition, where registration is performed based on an assumption about the scan geometry (e.g., linear translation at constant speed), is also used when geometric accuracy is not critical.
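The registration step can be sketched for the sensor-based case: each pixel of a tracked 2-D slice is carried into 3-D world coordinates through two rigid transforms, a probe-to-image calibration and the probe pose reported by the position sensor. This is a simplified illustration with hypothetical names; real systems must also estimate the calibration transform itself [56].

```python
def mat_vec(T, v):
    """Apply a 4x4 homogeneous transform to a homogeneous point (x, y, z, 1)."""
    return [sum(T[r][c] * v[c] for c in range(4)) for r in range(4)]

def pixel_to_world(u, v, pixel_spacing, T_calib, T_probe):
    """Map image pixel (u, v) of a tracked 2-D scan into 3-D world coordinates.

    The pixel is first scaled to physical units in the image plane (z = 0),
    then taken through the probe-to-image calibration transform and the
    tracked probe pose reported by the position sensor.
    """
    p_image = [u * pixel_spacing, v * pixel_spacing, 0.0, 1.0]
    p_probe = mat_vec(T_calib, p_image)
    return mat_vec(T_probe, p_probe)[:3]
```

Registering a whole slice is then just this mapping applied to every pixel, after which the scattered 3-D samples are resampled onto a Cartesian voxel grid during reconstruction.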

Since no strict restrictions are imposed on the field of view, freehand techniques are advantageous for acquiring large or irregular volumes, provided that low volume rates can be tolerated and that little or no motion of the anatomy occurs during scanning. Because they can be used with almost any probe, including high-frequency linear arrays, they are used for 3-D vascular, musculoskeletal, and small-parts imaging. In addition, they are being used in several commercial products (e.g., Hitachi's Real-time Virtual Sonography or GE's Volume Navigation) for interactive registration of preoperative computed tomography (CT) or magnetic resonance imaging (MRI) volumes to intraoperative ultrasound images [53], [54] to aid in interventional procedures. For example, in radio-frequency (RF) ablation of hepatic tumors, preoperative CT images can be used for delineating malignant nodules, whereas intraoperative US images can track tissue deformation and visualize the RF-ablation target area. Interactive fusion of images from both modalities during surgical interventions can therefore combine their benefits [55].

Much research in freehand techniques in the last decade focused on improving image quality by tackling various error sources in the registration of 2-D scan planes to 3-D volumes. These include sensor calibration [56], tissue deformation during scanning [57], and sensor noise in estimating probe position/orientation [58]. A number of algorithms have also


    Fig. 4. 3-D processing flow for volume visualization via planar and volume views.

been proposed for reconstructing 3-D volumes from freehand acquisitions, a detailed review of which can be found in [59].

Research into more accurate, robust, and easy-to-use freehand techniques continues. More recent freehand approaches include tracking using less obtrusive microelectromechanical systems (MEMS) sensors [60], optical fiber-based sensors, and hybrid approaches in which multiple tracking techniques are combined (e.g., speckle decorrelation plus MEMS [61]). Progress made in techniques not requiring an obtrusive sensor is encouraging. However, such techniques need to become more robust and accurate; current techniques do not work well when certain transducer motion conditions are not met or when images do not contain fully developed speckle.

    III. VOLUME VISUALIZATION

Once an ultrasound volume is acquired using a mechanical probe, a 2-D matrix-array transducer, or freehand scanning, the resulting volumes can be visualized in a multitude of ways following reconstruction. These can essentially be grouped into two categories:

1) planar views;
2) volume views.

Planar views are similar to conventional 2-D ultrasound views. Even though 3-D data are acquired, 2-D cross-sections are displayed to the user. On the other hand, volume views integrate information from an entire volume into a single image to provide en face views of the underlying 3-D anatomy, i.e., more like what a surgeon would see during surgery.

Today's 3-D US systems support volume visualization via both planar and volume views. A 3-D processing flow similar to that shown in Fig. 4 is typically used for this purpose, following a pipeline that involves acquisition via transducers, front-end beamforming, and back-end signal and image processing (Fig. 5). The first task in 3-D processing is volume reconstruction, which registers and resamples the acquired 2-D US slices into a 3-D Cartesian volume. If freehand acquisition is used, volume reconstruction needs the information from a position sensor, an assumption about the scan geometry (e.g., a linear transducer translating at constant speed), or an estimate of scan-plane displacement based on speckle decorrelation techniques. The known scan geometry of mechanical probes or 2-D array transducers simplifies the volume reconstruction process to 3-D scan conversion (3-D SC), in which data acquired in the scan geometry of the probe are resampled into a Cartesian volume via explicit coordinate transformations (typically from polar coordinates). In the following sections, we discuss in detail the planar and volume views along with the various processing components (following volume reconstruction) that are needed to generate such views. An acquisition/visualization technique known as spatiotemporal image correlation (STIC), which can generate both planar and volume views, is also discussed.
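As a rough sketch of 3-D SC, each output voxel of the Cartesian volume can be inverse-mapped into the probe's acquisition geometry and sampled there. A spherical (range/azimuth/elevation) geometry and nearest-neighbor sampling are assumed below for brevity; real systems use probe-specific geometries and higher order interpolation, and all names are illustrative.

```python
import math

def scan_convert_3d(polar, r0, dr, th0, dth, ph0, dph, grid, origin, spacing):
    """3-D scan conversion by inverse mapping.

    Each Cartesian voxel is mapped back to the probe geometry
    (range r, azimuth theta, elevation phi) and sampled via nearest
    neighbor; voxels outside the acquired sector are set to 0.
    """
    nr, nth, nph = len(polar), len(polar[0]), len(polar[0][0])
    nx, ny, nz = grid
    out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                x = origin[0] + i * spacing
                y = origin[1] + j * spacing
                z = origin[2] + k * spacing   # z: depth axis of the probe
                r = math.sqrt(x * x + y * y + z * z)
                theta = math.atan2(x, z)      # azimuth steering angle
                phi = math.atan2(y, z)        # elevation steering angle
                ir = round((r - r0) / dr)
                it = round((theta - th0) / dth)
                ip = round((phi - ph0) / dph)
                if 0 <= ir < nr and 0 <= it < nth and 0 <= ip < nph:
                    out[i][j][k] = polar[ir][it][ip]
    return out
```

The inverse-mapping direction (looping over output voxels rather than input samples) guarantees that every Cartesian voxel receives a value and avoids holes in the reconstructed volume.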

    A. Planar Views

In the earlier attempts at 3-D ultrasound, "fly-through" images were provided to the user by displaying the original 2-D slices of acquisition in sequential order. The only way the user could interact with the volumes was to go back and forth in the sequence and mentally construct a 3-D impression of the underlying anatomy. Measurements (e.g., the volume of an organ) could also be estimated from measurements made on individual cross-sections. As more powerful computers came along, arbitrary slicing of an acquired volume to display any cross-sectional plane, independent of the original acquisition direction, was made possible.

Generally, two types of planar views are supported on today's 3-D US systems: orthogonal planar reconstruction and parallel planar reconstruction. In the first, three planes that are orthogonal to each other are reconstructed and displayed to the user at the same time. This is usually denoted as multiplanar reconstruction (MPR). While the three MPR planes remain orthogonal to each other, they can typically be shifted with respect to each other, so they intersect the acquired volume and each


    Fig. 5. Overall 3-D US imaging pipeline: acquisition with transducers, front-end beamforming, back-end signal/image processing, and 3-D processing.

Fig. 6. An example illustrating parallel planar reconstruction of a fetal heart. This mode is also denoted as TUI or MSV on commercial 3-D US machines. (Reproduced with permission from [62].)

other at different positions. They can also all be rotated so that they intersect the volume at different angles. The visualization of an arcuate uterus (a normally shaped uterus but with an indentation on the superior aspect of the endometrial cavity) in Fig. 2 is a good example demonstrating the use of MPR in gynecology. In parallel planar reconstruction, multiple (e.g., 9 or 12) sequential cross-sections that are parallel to each other are reconstructed and presented to the user simultaneously (Fig. 6). This view is similar to the diagnostic views frequently used in the interpretation of MRI or CT images and is usually denoted as multislice view (MSV) or tomographic ultrasound imaging (TUI) [62].

In terms of algorithms, there is not much difference in how the planar views are reconstructed in MPR, in TUI, or as a single cross-sectional plane. Once the geometry of a plane is defined via a plane equation in 3-D space, the input coordinates in the ultrasound volume corresponding to each pixel on the planar output image are computed (inverse mapping). The output pixel is then computed via an interpolation of the neighboring voxels.
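The inverse-mapping step just described can be sketched as follows: the plane is parameterized by an origin and two in-plane direction vectors in volume coordinates, and each output pixel is trilinearly interpolated from its eight neighboring voxels. The names and the boundary handling (out-of-volume pixels set to 0) are illustrative.

```python
def trilinear(vol, x, y, z):
    """Trilinearly interpolate volume vol at fractional coordinates (x, y, z)."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    x0, y0, z0 = int(x), int(y), int(z)
    if not (0 <= x0 < nx - 1 and 0 <= y0 < ny - 1 and 0 <= z0 < nz - 1):
        return 0.0
    fx, fy, fz = x - x0, y - y0, z - z0
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                value += w * vol[x0 + dx][y0 + dy][z0 + dz]
    return value

def extract_plane(vol, origin, e1, e2, nu, nv):
    """Resample an arbitrary plane out of a volume by inverse mapping:
    output pixel (u, v) maps to origin + u*e1 + v*e2 in volume coordinates.
    """
    img = []
    for v in range(nv):
        row = []
        for u in range(nu):
            x = origin[0] + u * e1[0] + v * e2[0]
            y = origin[1] + u * e1[1] + v * e2[1]
            z = origin[2] + u * e1[2] + v * e2[2]
            row.append(trilinear(vol, x, y, z))
        img.append(row)
    return img
```

Swapping `trilinear` for a higher order kernel (e.g., tricubic) is exactly the image quality trade-off discussed below: sharper planes at a higher computational cost.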

In general, conventional 2-D images generated by a 1-D transducer are superior in image quality to planar images reconstructed from a 3-D volume. The anisotropic resolution of ultrasound (different axial, azimuth, and elevation resolutions) has an adverse effect on the image quality of planes resampled from volume data in different directions. When simple kernels (e.g., trilinear interpolation) are used during resampling to save computation, additional blur and aliasing can result. Furthermore, when a plane to be visualized is not oriented along the direction of insonation (the direction in which the ultrasound beams are transmitted and received), various US artifacts manifest themselves differently than in conventional 2-D US imaging, which can make them difficult to recognize [43]. For example, a shadowing artifact, which is easily distinguishable in conventional 2-D US imaging, could be mistaken for a hypoechoic region in the planar images reconstructed from a 3-D volume. Also, spatial compounding and multiple focal depths, which are image optimization options for 2-D imaging, are not practical for a mechanical 3-D probe due to its continuous motion. Until now, the image quality improvements in planar views have mainly come from higher quality volume acquisition provided by better transducers, a more sophisticated US signal/image processing pipeline, and the use of higher order interpolation kernels (e.g., cubic instead of linear interpolation) enabled by increased computing power. Also, as US systems become more programmable and flexible in their signal/image datapaths and more powerful processors with better I/O capabilities are used, direct resampling of planar views from pre-SC data [i.e., bypassing the 3-D SC in Fig. 4] becomes possible, which improves the image quality further [63].

    B. Volume Views

Many improvements in volume views occurred in the last decade. In volume views, each pixel value on the output image is determined by casting a ray from that pixel into the volume based on a viewing transformation (Fig. 7). The voxel information encountered along the path of each ray is then integrated into the pixel value using one of several techniques. When 3-D visualization transitioned from an offline postprocessing step on an external computer to a mode natively supported on the US machine [64], volume views were initially generated based on intensity-projection techniques. For example, with maximum intensity projection (MIP), the maximum voxel value along the ray was used, whereas minimum intensity projection (mIP) used the minimum voxel value. Alternatively, summation or averaging was performed along a ray to obtain X-ray-like projections. While these continue to be used in today's 3-D US systems in some cases [e.g., to visualize a fetal spine (MIP)], a major advance in the last decade has been the incorporation of more realistic, opacity-based volume-rendering techniques. These are based on optical models [65] that delineate surfaces


    Fig. 7. Volume rendering using direct ray casting. Voxels along each ray are resampled via a trilinear interpolation of eight neighboring original voxels.

and convey depth information better, as shown in the example of Fig. 1(b) for the visualization of a fetal cleft lip.
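The intensity-projection family (MIP, mIP, and X-ray-like averaging) reduces to a per-ray reduction over the sampled voxels. A minimal orthographic sketch follows, with rays along the z axis and no resampling or viewing transform; the function name and mode strings are illustrative.

```python
def project(vol, mode="mip"):
    """Orthographic intensity projection of a volume along z.

    Each output pixel integrates the voxels along its ray by maximum
    intensity (MIP), minimum intensity (mIP), or averaging (X-ray-like).
    """
    nx, ny = len(vol), len(vol[0])
    out = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            ray = vol[i][j]                    # voxels along the z ray
            if mode == "mip":
                out[i][j] = max(ray)           # bright structures (e.g., bone)
            elif mode == "min":
                out[i][j] = min(ray)           # dark structures (e.g., fluid)
            else:                              # "avg": X-ray-like projection
                out[i][j] = sum(ray) / len(ray)
    return out
```

In a full renderer the ray samples would come from the viewing transformation of Fig. 7 rather than a grid axis, but the per-ray reduction is the same.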

Opacity-based volume rendering is discussed in more detail below, followed by fast algorithms for interactive rendering, preprocess filtering to improve rendering image quality, and multivolume rendering techniques for visualizing blood flow along with tissues.

1) Opacity-Based Volume Rendering: Levoy's paper in 1988 [66] established the basis of opacity-based rendering algorithms for the visualization of 3-D datasets in medical imaging, including ultrasound. Two key operations, volume classification and shading, form the basis of his technique for achieving high-quality renderings. Volume classification is the process of determining the visibility of each voxel based on a decision of whether or not that voxel belongs to an anatomical section of interest to the user (tissue, organ, surface). Volume shading is the process of assigning colors to voxels based on the orientation of surfaces to improve depth perception in the final output image. Following volume classification and shading, the resulting opacities and colors are projected onto a 2-D image plane via ray casting. This projection operation using voxel opacities/colors is usually denoted in the computer graphics literature as compositing.
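The compositing step can be sketched for a single ray: classified opacities and shaded colors are accumulated front to back, with the common early-ray-termination optimization once the accumulated opacity saturates. Scalar gray values stand in for RGB colors here, and the function name and threshold are illustrative.

```python
def composite_ray(colors, opacities, threshold=0.99):
    """Front-to-back compositing of classified/shaded samples along one ray.

    Each sample's color is weighted by its opacity and by the transparency
    remaining in front of it; the loop stops early once the ray is nearly
    opaque, since deeper samples could no longer be seen.
    """
    acc_color, acc_alpha = 0.0, 0.0
    for c, a in zip(colors, opacities):
        acc_color += (1.0 - acc_alpha) * a * c
        acc_alpha += (1.0 - acc_alpha) * a
        if acc_alpha >= threshold:             # early ray termination
            break
    return acc_color, acc_alpha
```

A fully opaque first sample hides everything behind it, which is exactly how opacity-based rendering makes tissue surfaces occlude deeper structures.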

a) Volume classification: In volume classification, opacities are assigned to the voxels in a volume dataset. Opacity is an optical property that indicates what proportion of incoming light each voxel in a volume blocks and what proportion it passes through; it also determines how much light is scattered back from each voxel. Most medical volume datasets, including ultrasound, are not real optical data. Therefore, the opacities used in volume rendering are not true optical opacities of the dataset to be visualized but values assigned by the user to achieve varying visual effects. This way, structures of interest can be made more prominent through the assignment of opacities, whereas background structures can be made less visible. The assignment of opacities to voxels is carried out via the definition of transfer functions (TFs) that map the scalar voxel values to opacity values.

The design of an optimum opacity TF for a given dataset is an important task in achieving high-quality rendering. Highlighting the diagnostically relevant structures while suppressing less relevant details inside a volume via a selection of TFs can be tedious. Because of this, piecewise-linear TFs are commonly used to simplify the design of TFs and their adjustment via a handful of control points. Many 3-D ultrasound systems offer several preset TFs that have been optimized by the manufacturer for different clinical scenarios, and the user can select one from the available set. Several approaches have been proposed for adaptively designing TFs either automatically or semiautomatically [67]. TF design for US data is more challenging because the data lack clear and strong boundaries, except where the acoustic impedance mismatch between two adjacent layers is significant, as at tissue-fluid interfaces (e.g., fetus versus amniotic fluid and blood versus vessel walls). This, combined with the presence of speckle noise even within a homogeneous structure, makes adaptive TF design very difficult.
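A piecewise-linear opacity TF of the kind just described can be sketched as follows. The control points in the usage example are hypothetical, chosen to suppress low voxel values (fluid/speckle) and make bright tissue fully opaque.

```python
def make_opacity_tf(control_points):
    """Build a piecewise-linear opacity transfer function from a handful
    of (voxel_value, opacity) control points, returning a callable that
    maps a scalar voxel value to an opacity in [0, 1].
    """
    pts = sorted(control_points)
    def tf(v):
        if v <= pts[0][0]:
            return pts[0][1]
        if v >= pts[-1][0]:
            return pts[-1][1]
        # linear interpolation between the surrounding control points
        for (x0, a0), (x1, a1) in zip(pts, pts[1:]):
            if x0 <= v <= x1:
                t = (v - x0) / (x1 - x0)
                return a0 + t * (a1 - a0)
    return tf
```

For example, `make_opacity_tf([(0, 0.0), (100, 0.0), (200, 1.0), (255, 1.0)])` keeps voxel values below 100 transparent and ramps values between 100 and 200 linearly toward full opacity; a system's preset TFs amount to stored sets of such control points.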

Honigmann et al. [68] proposed an algorithm for the adaptive design of opacity TFs specifically for ultrasound volumes to highlight tissue-fluid boundaries, such as fetus-amniotic-fluid interfaces. They mathematically determined that parabolic TFs provide the best contrast at such interfaces, both for differentiating tissues with dissimilar voxel intensities (tissue contrast) and for recognizing the depth order of similar tissues (similar intensities) at different depths (depth contrast) in the rendered images. They later extended their approach to time-varying volume datasets for 4-D imaging [69]. Both approaches targeted high-contrast visualization of tissue-fluid interfaces.

    Design of optimal TFs for the visualization of US data that do not possess clear tissue-fluid boundaries is an area that has not been explored much. Also, while providing better contrast in rendered images is an important and valuable objective, no studies have yet reported on how to relate TF design to the extraction of what is clinically more relevant and what provides more diagnostic information. Therefore, there is certainly room for improvement in the optimal design of TFs for 3-D US data.

    In addition to opacity TFs (OTFs), another type of TF also exists in volume rendering. These are called color transfer functions (CTFs). Similar to how an OTF assigns opacity to each voxel, a CTF assigns color. In other modalities, e.g., CT or MRI, a CTF is used to assign different colors to different tissue types to enhance visualization. In ultrasound, such differentiation based solely on voxel values is not trivial. Therefore, in
its simplest form, CTFs used in ultrasound have mostly been a linear mapping between ultrasound values (e.g., B-mode magnitudes) and a simple colormap (e.g., shades of gray or another color); thus, they generally are not implemented explicitly. Nevertheless, an advantage of using CTFs in ultrasound is in multivolume rendering, i.e., when B-mode and color/power Doppler volumes are rendered together to visualize blood flow within tissues. These techniques are discussed later.

    b) Shading: Shading consists of 1) detection of surface orientations within a volume and 2) color assignment to each voxel based on an illumination model and the orientation of the surface to which the voxel belongs. Detection of surface orientations is typically achieved via a volume gradient operation, which returns a gradient vector at each voxel location indicating the direction of the greatest rate of change in voxel values. Normalizing the gradient vectors to unit magnitude yields surface normal vectors, which are then used in the computation of how light will reflect from each voxel. Once the surface normals are computed for all the voxels, color is assigned to each voxel based on an illumination model. Typically, Phong illumination [70] or its variations are used.
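    The gradient-plus-Phong pipeline described above can be sketched as follows; the lighting coefficients and the central-difference gradient are illustrative assumptions, not taken from any specific system:

```python
import numpy as np

def gradient_normals(volume):
    """Estimate surface normals via central-difference volume gradients.
    Voxels with (near-)zero gradient get a (near-)zero vector."""
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    g = np.stack([gx, gy, gz], axis=-1)
    mag = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(mag, 1e-12)

def phong_shade(normal, light_dir, view_dir,
                ka=0.2, kd=0.6, ks=0.2, shininess=16):
    """Classic Phong illumination for one voxel: ambient + diffuse + specular.
    `normal` is assumed to be a unit vector."""
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(np.dot(normal, l), 0.0)
    r = 2.0 * np.dot(normal, l) * normal - l   # light reflected about the normal
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```

    A linear intensity ramp along the depth axis, for instance, yields normals pointing along that axis, and a light/viewer on the same axis returns the maximum shade ka + kd + ks.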

    2) Fast Volume Rendering Algorithms: The traditional ray casting (direct ray casting) algorithm employed by Levoy [66] uses rays directly projected from pixels on an image plane, which intersect the 3-D object volume at irregular locations depending on the viewing angle (Fig. 7). This causes voxel sampling locations, defined at equidistant intervals along each ray, to fall in between the original slices of the volume. Therefore, 3-D interpolation is used, requiring voxels from multiple volume slices (e.g., trilinear interpolation uses the eight nearest voxels in the original volume grid; four from the front slice and four from the back slice). Also, input voxel accesses are scattered across the input volume. Such an incoherent data-access pattern is computationally inefficient because memory architectures are optimized to provide high throughput for localized and sequential accesses but suffer from longer latencies in the case of random accesses. Therefore, direct ray casting causes many cache and memory page misses, resulting in long latencies. To overcome this drawback of direct ray casting, various fast algorithms have been proposed.
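    Independent of how the sampling locations are chosen, every ray-casting variant accumulates the classified samples along each ray. A minimal front-to-back compositing sketch (with early ray termination, a common optimization; the threshold value is an arbitrary choice here):

```python
def composite_ray(colors, opacities, early_ray_termination=0.99):
    """Front-to-back alpha compositing of samples along one ray.
    `colors`/`opacities` are per-sample values after interpolation and
    transfer-function classification."""
    c_acc, a_acc = 0.0, 0.0
    for c, a in zip(colors, opacities):
        c_acc += (1.0 - a_acc) * a * c     # attenuate by opacity accumulated so far
        a_acc += (1.0 - a_acc) * a
        if a_acc >= early_ray_termination: # remaining samples are occluded
            break
    return c_acc, a_acc
```

    Once the accumulated opacity saturates, the rest of the ray contributes nothing visible, so the loop can stop early.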

    The shear-warp algorithm tries to overcome the high computational cost of traditional direct-ray-cast volume rendering by breaking down ray casting into two stages that can be implemented efficiently [71]. The main idea behind shear-warp is to process the 3-D data slice by slice on the original volume grid to reduce the computationally expensive 3-D interpolations to 2-D interpolations and also make data accesses coherent by confining them to one slice at a time. Slice-by-slice processing is achieved by mathematically factorizing the viewing matrix, which results in two operations: a volume shear component and a warp component. The shear component aligns all the slices such that the rays intersect them at right angles, and hence sampling locations fall on a regular grid during projection of rays [Fig. 8(a)]. Following projection of each slice onto a 2-D intermediate projection plane, a 2-D final warp on the projection plane corrects for the distortion introduced by volume shear.

    Fig. 8. Fast volume rendering using (a) shear-warp versus (b) shear-image-order algorithms. Bilinear interpolation is used within each slice to resample each voxel along a ray from the four neighboring original voxels.

    A major drawback of the shear-warp algorithm, however, comes from its own advantage, i.e., confinement of voxel sampling locations to discrete slice locations. Although this requires fewer operations, it results in poor sampling and aliasing in compositing, particularly for volume data or opacity transfer functions with high-frequency contents. Also, multiple stages of resampling (one during compositing and the other in the affine warp) result in loss of sharp details. By upsampling the volume along the ray-casting (compositing) direction and supersampling within each slice, aliasing can be reduced, and loss of sharp details can be minimized [72], although this comes at a cost of increased computation. With this technique, however, the data accesses still remain coherent. Another type of artifact that occurs with shear-warp is the Venetian-blinds artifact. As seen in Fig. 8(a), shearing original volume slices on top of each other results in regions where data exist in one slice for compositing, but no corresponding data exist for the same ray in the subsequent slices. This leads to parallel stripes of alternating dark and bright colors, i.e., the Venetian-blinds artifact [Fig. 9(b)], at some viewing angles. While upsampling and supersampling help reduce the appearance of these artifacts, the Venetian-blinds artifact is a fundamental limitation of shear-warp and is difficult to remove completely.

    To address the aliasing problem due to the confinement of sampling at slice locations, Engel et al. [73] proposed to precompute the opacity and color contributions from each ray segment between discrete slice locations (with an assumption of a continuous linear variation of voxel values within each segment) using a continuous ray-casting integral and store them in a lookup table (LUT) for all possible voxel value pairs at the two ends of a ray segment.

    Fig. 9. Volume rendering using (a) direct ray-casting, (b) shear-warp, and (c) shear-image-order algorithms without (upper row) and with (lower row) preintegration. Staircasing artifacts (white arrows) due to aliasing can easily be seen in the shear-warp and shear-image-order rendering without preintegration. This artifact is considerably suppressed in direct ray casting because sampling locations are not restricted to fall onto slice locations. The Venetian-blinds artifacts (black arrow) due to sliding of volume slices on top of each other are easily recognizable in shear-warp rendering. These are more pronounced and manifest themselves over the entire volume in preintegrated shear-warp because slabs (pairs of slices) slide on top of each other instead of individual slices. Preintegrated shear-image-order avoids both these artifacts and delivers image quality comparable to direct ray casting.

    Schulze et al. [74] later applied preintegration to the shear-warp algorithm. Preintegration of color and opacities results in a significant reduction of the artifacts from sampling at discrete slice locations, eliminating the need for upsampling in the compositing direction. Since the additional overhead of preintegration lookups is minimal, preintegrated volume rendering is computationally more efficient than volume upsampling in the depth direction. The improvement in image quality using preintegration over a simple Riemann-sum approximation during compositing can be seen in Fig. 9, where ultrasound volume data were acquired from a fetus phantom using a Hitachi Hi Vision Prerius system equipped with a mechanical 3-D probe and then rendered offline on a PC using different rendering algorithms. While the aliasing artifacts are effectively reduced using the preintegrated color and opacity LUTs in shear-warp, other drawbacks of shear-warp (e.g., the Venetian-blinds artifact and blurring due to two stages of resampling) still remain.
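    A sketch of how such a preintegration table for opacity could be built numerically; the linear-variation assumption from [73] is retained, while the table size and integration step here are arbitrary choices:

```python
import numpy as np

def preintegrate_opacity(tf_opacity, n_values=64, sub_steps=16):
    """Precompute a (front value, back value) -> segment opacity LUT.
    Assumes voxel values vary linearly within a ray segment and
    numerically integrates the opacity TF over that segment."""
    lut = np.zeros((n_values, n_values))
    t = (np.arange(sub_steps) + 0.5) / sub_steps       # midpoint sub-samples
    for sf in range(n_values):
        for sb in range(n_values):
            s = sf + (sb - sf) * t                     # linear value profile
            alpha = np.interp(s, np.arange(n_values), tf_opacity) / sub_steps
            lut[sf, sb] = 1.0 - np.prod(1.0 - alpha)   # accumulate transparency
    return lut

# Example: a simple ramp TF; at render time, one lookup per segment replaces
# many sub-samples along the ray.
tf = np.clip((np.arange(64) - 20) / 30.0, 0.0, 1.0)
seg_lut = preintegrate_opacity(tf)
```

    At rendering time the per-segment cost is a single table lookup with the two endpoint values, which is why the overhead over plain compositing is small.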

    The shear-image-order algorithm was proposed mainly to overcome the drawbacks associated with shear-warp [75]. It resamples each slice such that the interpolated voxels are aligned with the pixels in the final image [Fig. 8(b)]. This eliminates the need for the final affine warp in the shear-warp algorithm, thus preserving the sharp details better. At the same time, unlike direct ray casting, it maintains shear-warp's memory-access efficiency by confining the sampling locations to within each slice. Also, the shear-image-order algorithm overcomes the Venetian-blinds artifact of shear-warp because no volume shear, i.e., shearing of slices on top of each other, is needed. Instead, each slice undergoes a 2-D shear to correct for the distortion resulting from confining the sampling locations to original slice locations. Since an affine warp has to be performed on each slice, it is computationally more expensive than shear-warp. However, it is still much faster than direct ray casting.

    The aliasing problem due to sampling at discrete slice locations exists in the shear-image-order algorithm as well; it can be alleviated via upsampling in the compositing direction, albeit at increased computational cost. To address aliasing without a significant increase in computation, our group proposed a combination of shear-image-order and preintegration [76]. Fig. 9 illustrates the rendering obtained with each of these algorithms: direct ray casting, preintegrated direct ray casting, shear-warp, preintegrated shear-warp, shear-image-order, and preintegrated shear-image-order. Preintegrated shear-image-order provides a fine balance between image quality and required computation.

    3) Filtering: In 3-D US, the quality of volume-rendered images is severely degraded by the presence of speckle noise. When voxels corrupted by speckle noise are projected onto a 2-D image plane during volume rendering, they produce a grainy pattern that compromises the clear depiction of anatomy in the output image. In addition, speckle noise leads to erroneous gradient vector computation, which significantly perturbs the estimated surface normals of the underlying structures. Volume rendering based on erroneous gradients and noisy data produces poor shading with many dark and bright spots. For this reason, prior to volume rendering, ultrasound volume data are usually preprocessed to improve the SNR.

    Several different approaches have been proposed in the literature to reduce the undesired effects of speckle noise in volume visualization. Sakas et al. [77] proposed the use of a 3-D median
filter followed by a 3-D low-pass filter to reduce speckle and improve the quality of rendered images. A 3 × 3 × 3 or 5 × 5 × 5 median filter followed by a 3 × 3 × 3 low-pass filter produced the best results. Kim and Park [78] proposed to replace the 3-D median filter with a 2-D truncated median filter instead, which was applied on the individual 2-D slices in a volume before the application of a 3 × 3 × 3 low-pass filter. A 2-D truncated median filter was more efficient to compute than a 3-D median filter and provided an efficient means to approximate a mode filter, which was proposed as a maximum-likelihood estimator for removing speckle in ultrasound images [79].

    Shamdasani et al. [80] proposed a dual-filter approach that employs two different and independently controlled filters: a 3-D filter before compositing and a 3-D filter before gradient computation. Filtering the volume data before gradient computation with a moderate-size kernel (e.g., 7 × 7 × 7) can suppress speckle noise effectively. This enables better estimation of surface normals for a smoother shading effect that enhances 3-D perception. At the same time, filtering the volume data before compositing using a small kernel (e.g., 3 × 3 × 3) achieves a reasonable reduction in noise and improvement in image quality without excessively smoothing out the details. A major drawback of the dual-filter approach, however, is the increased computation cost because the volume needs to be filtered twice.

    Shamdasani et al. [80] also suggested the use of a 3-D boxcar filter in place of a 3-D low-pass filter (e.g., Gaussian) to reduce the computation time. Although Gaussian filters offer better frequency-response characteristics, the differences are barely noticeable when small kernels (e.g., 3 × 3 × 3 or 5 × 5 × 5) are used. On the other hand, boxcar filters are attractive from a computational standpoint in that no multiplications are required and they can be sped up significantly (and more or less independently of the kernel size) by moving-average techniques [81]. These properties make them attractive when interactive performance is important, e.g., in interactive 3-D ultrasound acquisition and visualization.
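    The moving-average idea can be sketched as follows: a running (prefix) sum makes the cost per output sample independent of the kernel radius, and the 3-D boxcar is applied separably along each axis. The edge handling by replication is an assumption for illustration:

```python
import numpy as np

def boxcar_1d(x, radius):
    """Moving-average filter via a running (prefix) sum: the cost per output
    sample is independent of kernel size, unlike direct convolution."""
    n = len(x)
    # Replicate edges so the averaging window is always full.
    padded = np.concatenate([np.repeat(x[0], radius), x, np.repeat(x[-1], radius)])
    csum = np.concatenate([[0.0], np.cumsum(padded)])
    width = 2 * radius + 1
    return (csum[width:width + n] - csum[:n]) / width

def boxcar_3d(volume, radius=1):
    """Separable 3-D boxcar: run the 1-D moving average along each axis."""
    out = volume.astype(np.float64)
    for axis in range(3):
        out = np.apply_along_axis(boxcar_1d, axis, out, radius)
    return out
```

    The separable form computes a (2r+1)³ box average with only three 1-D passes, each O(1) per voxel, which is what makes large kernels affordable at interactive rates.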

    A number of groups have also applied anisotropic diffusion filtering to preprocess 3-D US data before volume rendering [82]–[86]. Currently, the computational cost of 3-D anisotropic filters is too high for their use in fast implementations on 3-D/4-D systems, while their 2-D counterparts are already being used on some systems for speckle reduction in 2-D imaging. Since most 3-D US pipelines rely on the output of 2-D US pipelines to generate the individual slices comprising a 3-D volume, it should be expected that, on some systems, each slice can readily be preprocessed using such filters. As computational speed further improves, it may become possible to implement the 3-D versions if the improvement obtained from using 3-D filters instead of 2-D filters is significant.

    4) Multivolume Visualization: Earlier attempts at visualizing blood flow using 3-D US focused on rendering power/velocity data together with B-mode data using grayscale volume rendering, after the B-mode and Doppler volumes are combined into one volume in which Doppler and B-mode data are both represented by grayscale values [87]–[89]. However, a clear distinction between tissue and blood does not exist in such an approach, except for the different shades of gray used (e.g., brighter values for blood flow and darker values for B-mode). Many researchers have shown that clear visualization of anatomical relationships along with blood flow provides valuable clinical information for better diagnosis. For example, Ohishi et al. [90] showed that visualizing the anatomical relationships between a tumor and the feeding arteries within or around the tumor is valuable for determining its malignancy in liver and kidney. More recent studies also found that 3-D Doppler ultrasound can be used to differentiate malignant and benign tumors in the breast with its ability to quantify tumor blood flow and visualize vascular morphology [91]–[93]. Three-dimensional power/color Doppler data are typically displayed with 3-D B-mode data to provide their anatomical relationship, and such visualization is enabled by multivolume rendering techniques. Almost all commercially available 3-D US systems today support simultaneous visualization of blood flow and tissues, where grayscale is used to represent tissues and hue colors are used to represent vasculature (similar to the conventions used in 2-D color Doppler display).

    Various multivolume visualization techniques have been proposed for merging volumes from different modalities (e.g., CT and positron emission tomography) [94]–[96] and for in situ visualization of the merged data. In ultrasound, multivolume rendering is required to merge data from different modes, e.g., B-mode with color Doppler. Several approaches have been proposed specifically for ultrasound multivolume visualization [97]–[100]. In post fusion, the rendering pipeline is duplicated. Two volumes (B-mode and power) are rendered separately, and the resulting 2-D images are combined later at a postprocessing stage, where RGB values are assigned to power/velocity data to differentiate them from the B-mode image [98]. While it is attractive for its simplicity because it does not require any modification to the rendering pipeline (except the duplication and postprocessing), post fusion suffers from depth ambiguity between tissues and vasculature since the blending of the two volumes occurs after their independent renderings [100], [101]. This can be overcome by composite fusion [97], where tissues and flow data are colored separately and RGB rendering is performed together. In RGB rendering, each color channel is rendered independently of the others. Because tissue and flow data from different depths are composited together, color shifts may result, affecting each of the RGB channels differently. This may cause the colors to deviate from those originally intended to represent flow. Progressive fusion [100] can overcome this; the flow and tissue information are composited separately but with opacities updated jointly at each slice location to reflect the correct depth order. A final fusion stage combines the rendered results from tissue and flow. Unlike composite fusion, which requires deciding the relative weights of tissue and flow during compositing, progressive fusion allows separate control of the weights during the final blending stage, giving users the ability to easily emphasize flow versus tissues. Fig. 10 gives an example of these different approaches, where the data were acquired using a 3-D mechanical probe on a Hitachi Hi Vision 8500 system and then rendered offline on a PC using the various algorithms discussed.
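    The depth-ambiguity difference between post fusion and composite fusion can be illustrated on a single ray. This sketch uses scalar "colors" for simplicity and assumes the B-mode and flow volumes share the same depth sampling grid, which is a simplification of the actual RGB pipelines:

```python
def composite(samples):
    """Front-to-back compositing of (color, opacity) samples along one ray."""
    c_acc, a_acc = 0.0, 0.0
    for c, a in samples:
        c_acc += (1.0 - a_acc) * a * c
        a_acc += (1.0 - a_acc) * a
    return c_acc, a_acc

def post_fusion(bmode, flow, w_flow=0.5):
    """Render the two volumes independently, then blend the 2-D results.
    Depth order between tissue and flow is lost in the blend."""
    cb, _ = composite(bmode)
    cf, _ = composite(flow)
    return (1.0 - w_flow) * cb + w_flow * cf

def composite_fusion(bmode, flow):
    """Interleave tissue and flow samples in depth order and composite once,
    so flow behind opaque tissue is correctly occluded."""
    merged = [s for pair in zip(bmode, flow) for s in pair]
    c, _ = composite(merged)
    return c
```

    With a fully opaque tissue sample in front of a flow sample, composite fusion returns only the tissue contribution, while post fusion still mixes in the occluded flow, which is exactly the depth ambiguity described above.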

  • 7/28/2019 journal in electronics and communication

    11/17

    KARADAYI et al.: THREE-DIMENSIONAL ULTRASOUND 33

    Fig. 10. Illustration of different multivolume rendering techniques: (a) post fusion, (b) composite fusion, and (c) progressive fusion. Different opacity TFs forB-mode data are used for generating the upper (low opacity) and lower (high opacity) row of images.

    In all these rendering approaches, opacity plays an important role in depicting the spatial relationship between vasculature and tissue structures [100]. Images obtained using low and high opacities for the B-mode data from the same volume are shown in the upper and lower rows of Fig. 10, respectively. From these images, it appears that more transparent opacities for B-mode and opaque opacities for color/power data work well to differentiate the relevant information from tissues and flow in visualizing the vasculature. Another approach that achieves this was proposed by Petersch et al. [99], called silhouette rendering (SR). In SR, both vasculature and tissues are rendered. However, vasculature is emphasized by selectively assigning less weight to tissues that lie in an orientation to occlude the clear visibility of vessels, resulting in a more glass-like rendering of tissues in those regions and solid renderings in other areas.

    Several obstacles remain to be overcome in 3-D blood flow visualization. For example, how to best quantify a Doppler volume and generate reliable indices from it is still under investigation [102]. How to represent the flow direction in 3-D color imaging is also a challenge that has not been addressed yet. In 2-D color Doppler, the convention of representing flow towards and away from the transducer with red and blue colors, respectively, no longer holds as the user rotates the volume. A clutter filter optimized to suppress clutter and color/power Doppler artifacts (e.g., the flash artifact) in 2-D may no longer be optimum when applied to a 3-D volume. For example, while an occasionally generated flash artifact may not be detrimental in 2-D imaging, a similar artifact not suppressed properly in 3-D imaging could render an entire volume unusable [103]. Also, compared to 3-D B-mode imaging, 3-D Doppler imaging suffers more severely from low volume rates because of the need for ensemble data acquisition, which necessitates gated acquisition in more cases than in 3-D B-mode imaging.

    C. Spatio-Temporal Image Correlation

    A fetal heart beats at about twice the rate of an adult heart (i.e., 120–160 bpm), making it impossible to visualize its fine structures due to motion blur when a mechanical 3-D probe is used. While gated-acquisition techniques based on an electrocardiogram (ECG) signal have been used to minimize motion artifacts in acquiring 3-D volume datasets from adult hearts, fetal ECG is difficult to obtain because of the interference from the stronger maternal ECG, and the placement of ECG electrodes could restrict the movement of the US probe. Therefore, non-ECG-based gating would be needed. STIC is an image-based gating method supported by several manufacturers to overcome the low volume rate limitation of mechanical 3-D probes and enable fetal echocardiography [104]–[106].

    STIC achieves this by performing a very slow sweep (e.g., 0.003°/ms) over the duration of 15–25 heartbeats covering the fetal heart, while the 1-D array transducer inside the mechanical probe continuously acquires 2-D slices at a high frame rate (e.g., 100–150 frames/s). At the end of the sweep, this results in a long sequence of 2-D slices (e.g., 1500–2500 slices) densely sampling the beating fetal heart in both space and time. In a postprocessing step, each heartbeat is determined based on image-similarity metrics (e.g., autocorrelation [107] and cross-correlation [108]), and the slices are partitioned into distinct volumes representing different phases of the cardiac cycle based on their acquisition times relative to a detected heartbeat. Following the partitioning, a series of volumes representing different phases of the heart during one heart cycle are obtained and displayed.

    STIC may be performed in conjunction with B-mode or color/power Doppler acquisition [109], [110]. It could be used in the depiction of the main heart structures as well as the great vessels for the diagnosis of congenital heart disease. While fast acquisition and visualization using a 2-D matrix-array transducer is preferable to postprocessing, early studies reported that STIC image quality surpasses the image quality obtained via matrix-array transducers from fetal hearts [111]. This may be attributed to the better in-plane image quality of the mechanical 3-D probes. Also, STIC can deliver higher effective temporal resolution than 2-D arrays because the temporal resolution of a reconstructed volume is mainly determined by the 2-D acquisition rate of the mechanical probe's transducer array. For example, assuming a fetal heart rate of 150 bpm and 150 frames/s acquisition by the transducer, there are 60 distinct frames that correspond to each of the scan planes comprising a 3-D volume. This allows reconstruction of volumes for 60 different phases of the heart, leading to an effective temporal resolution of 60 volumes per cardiac cycle. In contrast, a 2-D array acquiring 25 volumes/s can deliver a temporal resolution of ten volumes per cardiac cycle.
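    The temporal-resolution arithmetic in this example reduces to frames acquired per cardiac cycle:

```python
def volumes_per_cardiac_cycle(frame_rate_hz, heart_rate_bpm):
    """Distinct frames acquired per scan plane during one cardiac cycle,
    i.e., the number of cardiac phases that can be reconstructed."""
    cycle_s = 60.0 / heart_rate_bpm   # duration of one heartbeat in seconds
    return frame_rate_hz * cycle_s

# Mechanical probe at 150 frames/s, fetal heart at 150 bpm:
mech = volumes_per_cardiac_cycle(150, 150)   # 60 reconstructable phases
# 2-D matrix array at 25 volumes/s, same heart rate:
array_2d = volumes_per_cardiac_cycle(25, 150)  # 10 volumes per cycle
```

    The same formula applies to any combination of frame rate and heart rate, which is why a modest increase in the probe's 2-D frame rate translates directly into finer cardiac phase sampling.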

    Three-dimensional ultrasound could play an important role in determining congenital anomalies of the fetal heart. While acquisition with 2-D arrays has been attempted, their use in fetal echocardiography is limited. Mechanical 3-D probes and STIC, on the other hand, are available on more US machines. Since STIC acquisition is long (e.g., 10 to 15 s), any fetal movement or maternal motion during the acquisition could affect the quality of the acquired data. Also, while STIC can compensate for some variability in the heart rate (e.g., 10%), arrhythmias beyond this range are relatively common in fetuses [112]. In such cases, the STIC algorithm may not work correctly. Thus, in spite of its promise, STIC has its own limitations. Therefore, fully sampled 2-D arrays with high volume rates might be a better option in the near future as they become more widely available.

    IV. COMPUTATIONAL APPROACHES

    Because of the real-time nature of US systems and the large amount of computation and data movement that needs to be performed [113], hardwired designs based on application-specific integrated circuits (ASICs) were widely used in US systems. In a hardwired system, each US task (e.g., beamforming, B-mode, color Doppler, and scan conversion) is handled by one or more hardwired boards dedicated to performing that particular task. This has limited the flexibility to add new functionalities, introduce new applications, or modify the existing algorithms. One of the important paradigm shifts in US system design that started in the mid-1990s and has been continuing since then is that increasingly more functionalities of a US machine are being implemented in software using programmable processors. The development of 3-D US processing in the last decade was accelerated by this paradigm shift.

    At first, programmable postprocessing boards were installed within hardwired US systems to facilitate prototyping and deployment of new algorithms and applications. For example, a programmable ultrasound image processor (PUIP) [64], which was based on two TMS320C80 MVP processors, was included with the Siemens Elegra machine and enabled the development and deployment of panoramic imaging and one of the first commercially available native 3-D US modes, 3scape. As more powerful processors came along, it became possible to support US back-end processing tasks [114], [115] as well as beamforming for low-end systems [116], [117] in software. This enabled quick development and integration of new algorithms and applications via software updates. If new functionality to be added demanded more computational resources than were available on the system, additional boards had to be incorporated. This is the trend 3-D US has followed.

    At the beginning, reconstruction and visualization were performed on a separate computer outside the US system [118], but when 3-D US became a native mode inside the US machine, it was supported by additional hardware boards (e.g., the PUIP), separate from the main US engine. While 3-D processing and visualization were handled by the separate hardware, front-end beamforming and back-end signal and image processing were performed on the main US engine. With the increases in the computational power of the PCs embedded within US machines, some recent systems have also started to utilize the host PC to perform 3-D processing (e.g., volume reconstruction/3-D SC, preprocess filtering, and other tasks shown in Fig. 4), sometimes with the help of additional accelerator boards for rendering.

    The approaches used for interactive volume rendering vary but can generally be grouped into three main categories: approaches based on specialized volume-rendering hardware, software-based approaches, and texture-mapping hardware-based approaches. Custom-designed hardware built specifically for volume rendering can typically deliver the highest performance. Much research effort went into developing such hardware in the 1990s, and many prototypes were built [119]. A volume-rendering accelerator board based on a single ASIC chip, VolumePro (TeraRecon, Inc., San Mateo, CA), was commercially developed to enable interactive visualization of large scientific and medical data on PCs. While many of the earlier attempts at designing volume-rendering hardware relied on direct ray casting accelerated through many parallel computing engines and special memory architectures, the VolumePro board (VolumePro 500) was based on the shear-warp algorithm [120]. It was eventually replaced with a second-generation architecture (VolumePro 1000) based on the shear-image-order algorithm [75] and a third-generation architecture (VolumePro 2000) based on direct ray casting. The custom-designed volume-rendering boards, even though they provide high performance

and image quality, are costly and do not offer much flexibility in incorporating new visualization techniques.

    Texture-mapping hardware-based solutions rely on the texture-mapping functionality of the surface-rendering hardware, i.e., graphics processing units (GPUs). Use of GPUs is attractive because they are already available in many PCs, are less costly than custom volume-rendering hardware, and handle many operations concurrently, such as bilinear or trilinear interpolations and shading calculations. They interface with fast memories and achieve a high throughput via deep pipelines and many pixel shaders running in parallel on multiple independent pixels. While these architectures are optimized for surface rendering, volume rendering can be supported by defining a series of planes parallel to the image projection plane along the viewing direction, mapping them to a 3-D texture that represents voxel colors and opacities, and compositing them onto the frame buffer [121], [122].

    Lacroute's design of the shear-warp algorithm was originally intended for software-based rendering with multiple processors [123]. The main advantage of a software-based approach is that it provides flexibility in adapting the rendering algorithm according to the needs of the underlying application. Although software-based approaches initially could not deliver performance similar to that of hardware-based approaches, recent programmable media processors, such as the IBM Cell Broadband Engine (Cell BE), and general-purpose processors with multimedia instruction-set extensions, such as the Intel Core 2 family
with MMX/SSE instruction-set extensions, can deliver good rendering performance. For example, Cell BE is a multicore architecture with eight synergistic processing elements (SPEs) that are optimized for multimedia processing. A 256 × 256 × 256 8-bit volume can be rendered at 20 volumes/s using a software-based implementation of shear-warp volume rendering on only one of the eight SPEs [124], leaving the seven other SPEs to handle other tasks (e.g., filtering and back-end signal and image processing).

An important and often very compute-intensive module in 3-D processing is volume reconstruction, or 3-D scan conversion (SC). Similar to scan conversion in conventional 2-D US imaging, 3-D SC converts scanline data from acquisition coordinates (e.g., polar) to a Cartesian volume to be used in rendering and display. 3-D SC can be implemented either directly or by separating it into two 2-D SC passes [125]. While a direct implementation of 3-D SC avoids an extra stage of resampling, its computational cost is high because of the many square-root and arctangent operations needed in address computation. A lookup-table (LUT) approach is sometimes used to speed up software-based implementations of 2-D SC by precomputing the input addresses and interpolation coefficients required for each output pixel. However, the required LUT size is prohibitively large in the case of 3-D SC. Separating the 3-D scan conversion into two 2-D SC passes, i.e., a series of 2-D SCs in the azimuth plane followed by a series of 2-D SCs in the elevation plane, overcomes this challenge because the 2-D SCs in each pass share the same address offsets and interpolation coefficients, and thus the same small 2-D LUTs can be used. A separable approach, however, could lower image quality because of its two stages of resampling. This loss in image quality can be alleviated considerably by carefully selecting the SC parameters in both passes [125]. If the 2-D scan converter from the 2-D US back-end is reused, the 3-D processing engine only has to perform SC in the elevation direction.
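The separable, LUT-driven approach can be sketched as follows in Python/NumPy. The geometry (a sector scan with linearly spaced lines) and all sizes are simplified assumptions for illustration, not the scheme of [125]; the point is that the square roots and arctangents are confined to LUT construction, so each per-plane pass is purely table-driven.

```python
import numpy as np

def build_sc_lut(n_samples, n_lines, depth, fov, out_h, out_w):
    """Precompute, for every output pixel of one 2-D scan-conversion pass,
    the integer input addresses and bilinear weights. All square-root and
    arctangent work happens here, once; the LUT is then shared by every
    plane converted in that pass."""
    ys = np.linspace(0.0, depth, out_h)
    xs = np.linspace(-depth * np.sin(fov / 2), depth * np.sin(fov / 2), out_w)
    X, Y = np.meshgrid(xs, ys)
    r = np.sqrt(X**2 + Y**2)                    # range from the apex
    th = np.arctan2(X, Y)                       # steering angle
    ri = r / depth * (n_samples - 1)            # fractional sample index
    ti = (th + fov / 2) / fov * (n_lines - 1)   # fractional line index
    valid = (ri <= n_samples - 1) & (ti >= 0) & (ti <= n_lines - 1)
    ri = np.clip(ri, 0, n_samples - 1)
    ti = np.clip(ti, 0, n_lines - 1)
    r0 = np.floor(ri).astype(np.intp)
    t0 = np.floor(ti).astype(np.intp)
    return r0, t0, (ri - r0).astype(np.float32), (ti - t0).astype(np.float32), valid

def scan_convert_2d(plane, lut):
    """One table-driven 2-D SC pass: bilinear interpolation with
    precomputed addresses and weights, no per-pixel trigonometry."""
    r0, t0, fr, ft, valid = lut
    r1 = np.minimum(r0 + 1, plane.shape[0] - 1)
    t1 = np.minimum(t0 + 1, plane.shape[1] - 1)
    out = ((1 - fr) * (1 - ft) * plane[r0, t0] + fr * (1 - ft) * plane[r1, t0] +
           (1 - fr) * ft * plane[r0, t1] + fr * ft * plane[r1, t1])
    return np.where(valid, out, 0.0)

def scan_convert_3d(volume, lut_az, lut_el):
    """Separable 3-D SC: an azimuth pass over every elevation plane, then
    an elevation pass over every column of the intermediate volume, each
    pass reusing a single small LUT."""
    pass1 = np.stack([scan_convert_2d(volume[:, :, e], lut_az)
                      for e in range(volume.shape[2])], axis=2)
    pass2 = np.stack([scan_convert_2d(pass1[:, a, :], lut_el)
                      for a in range(pass1.shape[1])], axis=1)
    return pass2
```

Note that `scan_convert_3d` resamples the data twice, which is exactly the image-quality trade-off discussed above; a direct 3-D SC would instead evaluate the square root and arctangent once per output voxel.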

    V. DISCUSSION AND CONCLUSION

The introduction of and continued improvements in mechanical and 2-D matrix-array transducers over the last decade have played a key role in facilitating 3-D ultrasound and its clinical applications. While mechanical 3-D probes are slower at acquiring volumes, they still provide better image quality (i.e., higher in-plane resolution, fewer sidelobe artifacts, and higher sensitivity) and cost less than 2-D arrays. On the other hand, 2-D arrays are still relatively new and continue to evolve, and further improvements in image quality can be expected as the technology advances. The advantages of 2-D arrays are high volume rates and equal focusing in both the azimuth and elevation directions. In the foreseeable future, both mechanical probes and 2-D arrays can be expected to be supported on more US systems. With the exception of cardiac applications, mechanical probes are likely to be used more than 2-D arrays initially, but this could change as and if the cost of 2-D arrays comes down.

The main difficulty in manufacturing 2-D arrays comes from the need to interconnect and wire each element individually. The elements in 2-D arrays are significantly higher in number and smaller in size than those in 1-D arrays, which makes

interconnections even more challenging. Furthermore, to simplify 3-D beamforming, part of the beamforming circuitry is typically placed and integrated within the transducer, which further increases complexity and cost. A different transducer construction technology, capacitive micromachined ultrasonic transducers (cMUTs) [126], has been proposed for manufacturing 2-D arrays because of several attractive features it promises [127]. Unlike traditional lead-zirconate-titanate (PZT) arrays, which have to be individually wired, cMUTs can be produced using standard semiconductor manufacturing processes, improving the yield and making arrays with much smaller and more numerous elements feasible. The fact that they can be integrated with the electronics [128] makes them favorable for 2-D arrays, where integrated circuit boards are currently used on the transducer head to perform part of the beamforming and reduce wiring complexity. cMUT arrays also provide higher bandwidth than PZT arrays, which makes them particularly attractive for harmonic imaging and better axial resolution. cMUTs have lower sensitivity and penetration depth than PZT arrays, which needs to be overcome before their widespread adoption. Another transducer technology under research is piezoelectric micromachined ultrasonic transducers (pMUTs) [129], which are based on thin PZT films but micromachined similarly to cMUTs. These have so far been investigated by a limited number of groups and have yet to be extensively evaluated.

Three-dimensional US systems are going to benefit from the availability of powerful multicore programmable processors. Some examples are IBM's Cell Broadband Engine, Intel's Core processor family, and AMD's Phenom II and Opteron families. They can deliver the high performance of a multiprocessor implementation without incurring the overhead of discrete processor chips having to communicate over external interfaces. Since multiple cores on a chip typically share the same resources, e.g., the memory bus and on-chip memories/caches, each core can access the same data without having to duplicate or forward data between separate processors, which further improves implementation efficiency. Current-generation multicore chips typically have up to eight cores, but more cores, e.g., 32, are expected in the future (e.g., Intel's Larrabee processor is expected to have 16 cores).
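To illustrate the shared-memory point, the sketch below splits a maximum-intensity projection (a common 3-D US display mode) across worker threads that all read the same volume in place, with no copies or inter-processor transfers. Python threads and the band partitioning are purely illustrative assumptions; a production renderer would partition work across cores in optimized native/SIMD code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mip_rows(volume, row_range):
    """Maximum-intensity projection of one horizontal band of output rows.
    All workers index the same `volume` array in shared memory, mirroring
    cores that share one memory bus and cache hierarchy."""
    r0, r1 = row_range
    return volume[:, r0:r1, :].max(axis=0)

def parallel_mip(volume, n_workers=4):
    """Split the projection into n_workers row bands and compute them
    concurrently, then stitch the bands back into one image."""
    ny = volume.shape[1]
    bands = [(i * ny // n_workers, (i + 1) * ny // n_workers)
             for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda b: mip_rows(volume, b), bands))
    return np.concatenate(parts, axis=0)

vol = np.random.randint(0, 256, size=(64, 128, 128), dtype=np.uint8)
image = parallel_mip(vol)
```

Because each band writes a disjoint region of the output, no locking is needed; the same data-parallel decomposition applies to filtering and other back-end stages.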

Because 3-D visualization is currently implemented on a 3-D processing engine separate from the main US processing engine, US back-end signal/image processing and 3-D visualization tasks have distinct boundaries. However, as more tasks are implemented on the same computationally powerful new-generation processors, more flexibility becomes possible, and the boundary between 3-D processing and conventional back-end processing will blur. For example, 3-D processing could have access to ultrasound data at much earlier stages without having to rely on images optimized for 2-D display. This could change the way 2-D US signal processing is performed when the acquired data are intended for 3-D imaging. For example, separate filtering stages in 2-D and 3-D processing could be combined into one filtering stage optimized specifically for 3-D visualization. Similarly, beamforming and back-end signal/image processing operations could be performed only for those voxels that are to be rendered


in the final image [130]. This could lead to substantial savings in computation because it avoids redundant processing of data that will not be displayed to the user.
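One way to sketch this idea: from a cheap per-voxel opacity estimate, mark which voxels a front-to-back compositing pass would actually reach, and restrict expensive back-end processing to that set. This is an illustrative interpretation under an axis-aligned viewing assumption, not the method of [130]; the saturation threshold is likewise an assumed parameter.

```python
import numpy as np

def visibility_mask(opacity, threshold=0.98):
    """Mark voxels that front-to-back compositing would actually reach.

    opacity : (nz, ny, nx) per-voxel opacity in [0, 1], z = viewing axis.

    A voxel is skippable once the accumulated alpha of the voxels in
    front of it saturates; filtering (or even beamforming) could then be
    restricted to the visible set.
    """
    # Cumulative transparency including each slice itself...
    transparency = np.cumprod(1.0 - opacity, axis=0)
    # ...shifted so voxel z sees the product over slices 0..z-1 only.
    in_front = np.ones_like(opacity)
    in_front[1:] = transparency[:-1]
    return in_front > (1.0 - threshold)

op = np.zeros((32, 16, 16), dtype=np.float32)
op[4] = 1.0                 # an opaque wall at slice 4
mask = visibility_mask(op)  # slices 0-4 visible; slices behind the wall are skippable
```

In a typical rendered view, where near-field tissue occludes most of the volume, such a mask can eliminate a large fraction of the per-voxel work.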

In this paper, we have summarized the main technological developments of the last decade that have had a major impact on current 3-D US systems. Compared to the status in the 1980s and 1990s, today's 3-D US systems support much easier acquisition and deliver higher image quality. However, several issues still remain. The image quality of 3-D US imaging is still inferior to that of conventional 2-D imaging. While 3-D US can offer unique views of the underlying anatomy, it is not clear how diagnostically more relevant views can be quickly extracted from an acquired volume for the clinician to evaluate in a timely fashion. Because of the amount of data acquired and presented to the user at once, it becomes difficult to manually optimize the viewing parameters and to perform measurements quickly. Therefore, more automation in these areas is desirable and needs to be developed. Siemens' Syngo software, which automatically extracts fetal features from an acquired volume, is one example of how such automation could help clinicians increase throughput and improve their workflow [131].

While display technologies that enhance 3-D perception, such as stereoscopic displays, were considered in the development of 3-D US, they have not gained much momentum due to the difficulties involved in supporting and using such displays in clinical settings. In recent years, stereoscopic display technologies, such as wearable viewers and specialized screens, have advanced considerably and are being increasingly adopted for video gaming and augmented-reality applications. It remains to be seen whether 3-D US will benefit from their widespread availability.

In the foreseeable future, we anticipate 3-D US being used in conjunction with 2-D imaging, i.e., as a clinical tool that augments conventional 2-D US imaging for further confirmation of certain diagnoses or for workflow improvements. Wider clinical acceptance and standalone use of 3-D US will certainly require more clinical studies showing the efficacy of this modality against 2-D routines. The current high costs of 3-D systems likely hinder wider availability of 3-D US. If clinical utility is substantiated, we can expect more machines with 3-D options, and cost could become a secondary issue. Nevertheless, reduced costs along with continued improvements in transducer manufacturing technology, computer performance, and ease of use will certainly facilitate further adoption.

    REFERENCES

[1] H. F. Stegall, R. F. Rushmer, and D. W. Baker, A transcutaneous ultrasonic blood-velocity meter, J. Appl. Physiol., vol. 21, pp. 707–711, 1966.

[2] D. W. Baker, Pulsed ultrasonic Doppler blood-flow sensing, IEEE Trans. Sonics Ultrason., vol. SU-17, pp. 170–184, 1970.

[3] R. C. Eggleton and K. W. Johnston, Real time mechanical scanning system compared with array techniques, in Proc. IEEE Ultrason. Symp., 1974, pp. 16–18.

[4] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, Real-time two-dimensional blood flow imaging using an autocorrelation technique, IEEE Trans. Sonics Ultrason., vol. SU-32, pp. 458–464, 1985.

[5] K. E. Thomenius, Evolution of ultrasound beamformers, in Proc. IEEE Ultrason. Symp., 1996, vol. 2, pp. 1615–1622.

[6] D. H. Howry, G. Posakony, C. R. Cushman, and J. H. Holmes, Three-dimensional and stereoscopic observation of body structures by ultrasound, J. Appl. Physiol., vol. 9, pp. 304–306, 1956.

[7] J. F. Brinkley, W. E. Moritz, and D. W. Baker, Ultrasonic three-dimensional imaging and volume from a series of arbitrary sector scans, Ultrasound Med. Biol., vol. 4, pp. 317–327, 1978.

[8] F. H. Raab, E. B. Blood, T. O. Steiner, and H. R. Jones, Magnetic position and orientation tracking system, IEEE Trans. Aerosp. Electron. Syst., vol. AES-15, pp. 709–718, 1979.

[9] P. H. Mills and H. Fuchs, 3-D ultrasound display using optical tracking, in Proc. 1st Conf. Vis. Biomed. Comput., 1990, pp. 490–497.

[10] P. E. Nikravesh, D. J. Skorton, K. B. Chandran, Y. M. Attarwala, N. Pandian, and R. E. Kerber, Computerized three-dimensional finite element reconstruction of the left ventricle from cross-sectional echocardiograms, Ultrason. Imag., vol. 6, pp. 48–59, 1984.

[11] J.-F. Chen, J. B. Fowlkes, P. L. Carson, and J. M. Rubin, Determination of scan-plane motion using speckle decorrelation: Theoretical considerations and initial test, Int. J. Imag. Syst. Technol., vol. 8, pp. 38–44, 1997.

[12] H. Brandl, A. Gritzky, and M. Haizinger, 3D ultrasound: A dedicated system, Eur. Radiol., vol. 9, pp. S331–S333, 1999.

[13] S. W. Smith, H. G. Pavy Jr., and O. T. von Ramm, High-speed ultrasound volumetric imaging system. I. Transducer design and beam steering, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 38, pp. 100–108, 1991.

[14] L. D. Platt, Three-dimensional ultrasound, 2000, Ultrasound Obstetr. Gynecol., vol. 16, pp. 295–298, 2000.

[15] J. M. Faure, G. Captier, M. Baumler, and P. Boulot, Sonographic assessment of normal fetal palate using three-dimensional imaging: A new technique, Ultrasound Obstetr. Gynecol., vol. 29, pp. 159–165, 2007.

[16] G. Pilu and M. Segata, A novel technique for visualization of the normal and cleft fetal secondary palate: Angled insonation and three-dimensional ultrasound, Ultrasound Obstetr. Gynecol., vol. 29, pp. 166–169, 2007.

[17] T. Esser, P. Rogalla, N. Sarioglu, and K. D. Kalache, Three-dimensional ultrasonographic demonstration of agenesis of the 12th rib in a fetus with trisomy 21, Ultrasound Obstetr. Gynecol., vol. 27, pp. 714–715, 2006.

[18] R. Hershkovitz, Prenatal diagnosis of isolated abnormal number of ribs, Ultrasound Obstetr. Gynecol., vol. 32, pp. 506–509, 2008.

[19] D. Jurkovic, A. Geipel, K. Gruboeck, E. Jauniaux, M. Natucci, and S. Campbell, Three-dimensional ultrasound for the assessment of uterine anatomy and detection of congenital anomalies: A comparison with hysterosalpingography and two-dimensional sonography, Ultrasound Obstetr. Gynecol., vol. 5, pp. 233–237, 1995.

[20] F. Raga, F. Bonilla-Musoles, J. Blanes, and N. G. Osborne, Congenital Mullerian anomalies: Diagnostic accuracy of three-dimensional ultrasound, Fertil. Steril., vol. 65, pp. 523–528, 1996.

[21] T. Ghi, P. Casadio, M. Kuleva, A. M. Perrone, L. Savelli, S. Giunchi, M. C. Meriggiola, G. Gubbini, G. Pilu, C. Pelusi, and G. Pelusi, Accuracy of three-dimensional ultrasound in diagnosis and classification of congenital uterine anomalies, Fertil. Steril., vol. 92, pp. 808–813, 2008.

[22] L. Sugeng, P. Coon, L. Weinert, N. Jolly, G. Lammertin, J. E. Bednarz, K. Thiele, and R. M. Lang, Use of real-time 3-dimensional transthoracic echocardiography in the evaluation of mitral valve disease, J. Amer. Soc. Echocardiogr., vol. 19, pp. 413–421, 2006.

[23] M. Takeuchi and R. M. Lang, Three-dimensional stress testing: Volumetric acquisitions, Cardiol. Clin., vol. 25, pp. 267–272, 2007.

[24] R. Chaoui and K. S. Heling, New developments in fetal heart scanning: Three- and four-dimensional fetal echocardiography, Semin. Fetal Neonatal Med., vol. 10, pp. 567–577, 2005.

[25] L. F. Goncalves, W. Lee, J. Espinoza, and R. Romero, Three- and 4-dimensional ultrasound in obstetric practice: Does it help?, J. Ultrasound Med., vol. 24, pp. 1599–1624, 2005.

[26] G. Valocik, O. Kamp, and C. A. Visser, Three-dimensional echocardiography in mitral valve disease, Eur. J. Echocardiogr., vol. 6, pp. 443–454, 2005.

[27] M. X. Xie, X. F. Wang, T. O. Cheng, Q. Lu, L. Yuan, and X. Liu, Real-time 3-dimensional echocardiography: A review of the development of the technology and its clinical application, Prog. Cardiovasc. Dis., vol. 48, pp. 209–225, 2005.

[28] L. F. Goncalves, W. Lee, J. Espinoza, and R. Romero, Examination of the fetal heart by four-dimensional (4-D) ultrasound with spatio-temporal image correlation (STIC), Ultrasound Obstetr. Gynecol., vol. 27, pp. 336–348, 2006.


[29] R. C. Houck, J. E. Cooke, and E. A. Gill, Live 3D echocardiography: A replacement for traditional 2D echocardiography?, Amer. J. Roentgenol., vol. 187, pp. 1092–1106, 2006.

[30] R. M. Lang, V. Mor-Avi, L. Sugeng, P. S. Nieman, and D. J. Sahn, Three-dimensional echocardiography: The benefits of the additional dimension, J. Amer. Coll. Cardiol., vol. 48, pp. 2053–2069, 2006.

[31] G. R. Marx and X. Su, Three-dimensional echocardiography in congenital heart disease, Cardiol. Clin., vol. 25, pp. 357–365, 2007.

[32] V. Mor-Avi and R. M. Lang, Three-dimensional echocardiographic evaluation of myocardial perfusion, Cardiol. Clin., vol. 25, pp. 273–282, 2007.

[33] B. Tutschek and D. J. Sahn, Three-dimensional echocardiography for studies of the fetal heart: Present status and future perspectives, Cardiol. Clin., vol. 25, pp. 341–355, 2007.

[34] D. V. Valsky and S. Yagel, Three-dimensional transperineal ultrasonography of the pelvic floor: Improving visualization for new clinical applications and better functional assessment, J. Ultrasound Med., vol. 26, pp. 1373–1387, 2007.

[35] L. Coyne, K. Jayaprakasan, and N. Raine-Fenning, 3D ultrasound in gynecology and reproductive medicine, in Women's Health, London, U.K., 2008, vol. 4, pp. 501–516.

[36] H. A. G. Filho, L. L. da Costa, E. Araujo, Jr., L. M. Nardozza, P. M. Nowak, A. F. Moron, R. Mattar, and C. R. Pires, Placenta: Angiogenesis and vascular assessment through three-dimensional power Doppler ultrasonography, Arch. Gynecol. Obstetr., vol. 277, pp. 195–200, 2008.

[37] V. Mor-Avi, L. Sugeng, and R. M. Lang, Three-dimensional adult echocardiography: Where the hidden dimension helps, Curr. Cardiol. Rep., vol. 10, pp. 218–225, 2008.

[38] V. Mor-Avi, L. Sugeng, and R. M. Lang, Real-time 3-dimensional echocardiography: An integral component of the routine echocardiographic examination in adult patients?, Circulation, vol. 119, pp. 314–329, 2009.

[39] J. Solis, M. Sitges, R. A. Levine, and J. Hung, Three-dimensional echocardiography. New possibilities in mitral valve assessment, Rev. Esp. Cardiol., vol. 62, pp. 188–198, 2009.

[40] A. Fenster and D. B. Downey, 3-D ultrasound imaging: A review, IEEE Eng. Med. Biol. Mag., vol. 15, pp. 41–51, 1996.

[41] T. R. Nelson and D. H. Pretorius, Three-dimensional ultrasound imaging, Ultrasound Med. Biol., vol. 24, pp. 1243–1270, 1998.

[42] G. York and Y. Kim, Ultrasound processing and computing: Review and future directions, Annu. Rev. Biomed. Eng., vol. 1, pp. 559–588, 1999.

[43] T. R. Nelson, D. H. Pretorius, A. Hull, M. Riccabona, M. S. Sklansky, and G. James, Sources and impact of artifacts on clinical three-dimensional ultrasound imaging, Ultrasound Obstetr. Gynecol., vol. 16, pp. 374–383, 2000.

[44] A. Fenster, D. B. Downey, and H. N. Cardinal, Three-dimensional ultrasound imaging, Phys. Med. Biol., vol. 46, pp. R67–99, 2001.

[45] E. Merz, F. Bahlmann, and G. Weber, Volume scanning in the evaluation of fetal malformations: A new dimension in prenatal diagnosis, Ultrasound Obstetr. Gynecol., vol. 5, pp. 222–227, 1995.

[46] O. T. von Ramm, S. W. Smith, and H. G. Pavy Jr., High-speed ultrasound volumetric imaging system. II. Parallel processing and image display, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 38, pp. 109–115, 1991.

[47] T. Bjastad, S. A. Aase, and H. Torp, The impact of aberration on high frame rate cardiac B-mode imaging, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 54, pp. 32–41, 2007.

[48] S. W. Smith, W. Lee, E. D. Light, J. T. Yen, P. Wolf, and S. Idriss, Two dimensional arrays for 3-D ultrasound imaging, in Proc. IEEE Ultrason. Symp., 2002, vol. 2, pp. 1545–1553.

[49] Y. Hosono and Y. Yamashita, Piezoelectric ceramics with high dielectric constants for ultrasonic medical transducers, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 52, pp. 1823–1828, 2005.

[50] T. L. Szabo and P. A. Lewin, Piezoelectric materials for imaging, J. Ultrasound Med., vol. 26, pp. 283–288, 2007.

[51] D. H. Turnbull and F. S. Foster, Beam steering with pulsed two-dimensional transducer arrays, IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 38, pp. 320–333, 1991.

[52] B. Savord and R. Solomon, Fully sampled matrix transducer for real time 3D ultrasonic imaging, in Proc. IEEE Ultrason. Symp., 2003, vol. 1, pp. 945–953.

[53] N. Pagoulatos, W. S. Edwards, D. R. Haynor, and Y. Kim, Interactive 3D registration of ultrasound and magnetic resonance images based on a magnetic position sensor, IEEE Trans. Inf. Technol. Biomed., vol. 3, pp. 278–288, 1999.

[54] J. H. Kaspersen, E. Sjølie, J. Wesche, J. Åsland, J. Lundbom, A. Ødegård, F. Lindseth, and T. A. N. Hernes, Three-dimensional ultrasound-based navigation combined with preoperative CT during abdominal interventions: A feasibility study, Cardiovasc. Intervent. Radiol., vol. 26, pp. 347–356, 2003.

[55] H. Kawasoe, Y. Eguchi, T. Mizuta, T. Yasutake, I. Ozaki, T. Shimonishi, K. Miyazaki, T. Tamai, A. Kato, S. Kudo, and K. Fujimoto, Radiofrequency ablation with the real-time virtual sonography system for treating hepatocellular carcinoma difficult to detect by ultrasonography, J. Clin. Biochem. Nutr., vol. 40, pp. 66–72, 2007.

[56] L. Mercier, T. Lango, F. Lindseth, and D. L. Collins, A review of calibration techniques for freehand 3-D ultrasound systems, Ultrasound Med. Biol., vol. 31, pp. 449–471, 2005.

[57] G. M. Treece, R. W. Prager, A. H. Gee, and L. Berman, Correction of probe pressure artifacts in freehand 3D ultrasound, Med. Image Anal., vol. 6, pp. 199–214, 2002.

[58] G. M. Treece, A. H. Gee, R. W. Prager, C. J. Cash, and L. H. Berman, High-definition freehand 3-D ultrasound, Ultrasound Med. Biol., vol. 29, pp. 529–546, 2003.

[59] O. V. Solberg, F. Lindseth, H. Torp, R. E. Blake, and T. A. N. Hernes, Freehand 3D ultrasound reconstruction algorithms – A review, Ultrasound Med. Biol., vol. 33, pp. 991–1009, 2007.

[60] A. A. A. Rahni, I. Yahya, and S. M. Mustaza, 2D translation from a 6-DOF MEMS IMU's orientation for freehand 3D ultrasound scanning, in Proc. 4th Kuala Lumpur Int. Conf. Biomed. Eng., 2008, pp. 699–702.

[61] R. J. Housden, A. H. Gee, R. W. Prager, and G. M. Treece, Rotational motion in sensorless freehand three-dimensional ultrasound, Ultrasonics, vol. 48, pp. 412–422, 2008.

[62] G. R. DeVore and B. Polanko, Tomographic ultrasound imaging of the fetal heart: A new technique for identifying normal and abnormal cardiac anatomy, J. Ultrasound Med., vol. 24, pp. 1685–1696, 2005.

[63] J. Shang, R. Managuli, and Y. Kim, Efficient arbitrary volume reslicing for pre-scan-converted volume in an ultrasound backend, in Proc. IEEE Ultrason. Symp., 2009.

[64] Y. Kim, J. H. Kim, C. Basoglu, and T. C. Winter, Programmable ultrasound imaging using multimedia technologies: A next-generation ultrasound machine, IEEE Trans. Inf. Technol. Biomed., vol. 1, pp. 19–29, 1997.

[65] N. Max, Optical models for direct volume rendering, IEEE Trans. Vis. Comput. Graphics, vol. 1, pp. 99–108, 1995.

[66] M. Levoy, Display of surfaces from volume data, IEEE Comput. Graph. Appl., vol. 8, pp. 29–37, 1988.

[67] H. Pfister, B. Lorensen, C. Bajaj, G. Kindlmann, W. Schroeder, L. S. Avila, K. M. Raghu, R. Machiraju, and L. Jinho, The transfer function bake-off, IEEE Comput. Graph. Appl., vol. 21, pp. 16–22, 2001.

[68] D. Honigmann, J. Ruisz, and C. Haider, Adaptive design of a global opacity transfer function for direct volume rendering of ultrasound data, in Proc. IEEE Vis. (VIS'03), 2003, pp. 489–496.

[69] B. Petersch, M. Hadwiger, H. Hauser, and D. Honigmann, Real time computation and temporal coherence of opacity transfer functions for direct volume rendering of ultrasound data, Comput. Med. Imag. Graph., vol. 29, pp. 53–63, 2005.

[70] B. T. Phong, Illumination for computer generated pictures, Commun. ACM, vol. 18, pp. 311–317, 1975.

[71] P. Lacroute and M. Levoy, Fast volume rendering using a shear-warp factorization of the viewing transformation, in Proc. 21st Annu. Conf. Comput. Graph. Interact. Tech., 1994, pp. 451–458.

[72] J. Sweeney and K. Mueller, Shear-warp deluxe: The shear-warp algorithm revisited, in Proc. Symp. Data Vis., 2002, pp. 95–104.

[73] K. Engel, M. Kraus, and T. Ertl, High-quality pre-integrated volume rendering using hardware-accelerated pixel shading, in Proc. ACM SIGGRAPH/EUROGRAPHICS Workshop Graph. Hardware, 2001, pp. 9–16.

[74] J. P. Schulze, M. Kraus, U. Lang, and T. Ertl, Integrating pre-integration into the shear-warp algorithm, in Proc. 2003 EUROGRAPHICS/IEEE TVCG Workshop Vol. Graph., 2003, pp. 109–118.

[75] Y. Wu, V. Bhatia, H. Lauer, and L. Seiler, Shear-image order ray casting volume rendering, in Proc. 2003 Symp. Interact. 3D Graph., 2003, pp. 152–162.

[76] R. Managuli, E.-H. Kim, K. Karadayi, and Y. Kim, Advanced volume rendering algorithm for real-time 3D ultrasound: Integrating pre-integration into shear-image-order algorithm, Proc. SPIE Med. Imag., vol. 6147, p. 614702, 2006.

[77] G. Sakas, L.-A. Schreyer, and M. Grimm, Preprocessing and volume rendering of 3D ultrasonic data, IEEE Comput. Graph. Appl., vol. 15, pp. 47–54, 1995.


[78] C. Kim and H. Park, Preprocessing and efficient volume rendering of 3-D ultrasound image, IEICE Trans. Inf. Syst., vol. E83-D, pp. 259–264, 2000.

[79] A. N. Evans and M. S. Nixon, Mode filtering to reduce ultrasound speckle for feature extraction, Proc. Inst. Elect. Eng. Vision, Image Signal Process., vol. 142, pp. 87–94, 1995.

[80] V. Shamdasani, U. Bae, R. Managuli, and Y. Kim, Improving the visualization of 3D ultrasound data with 3D filtering, Proc. SPIE Med. Imag., vol. 5744, pp. 455–461, 2005.

[81] U. Bae, V. Shamdasani, R. Managuli, and Y. Kim, Fast adaptive unsharp masking with programmable mediaprocessors, J. Dig. Imag., vol. 16, pp. 230–239, 2003.

[82] Q. Sun, J. A. Hossack, J. Tang, and S. T. Acton, Speckle reducing anisotropic diffusion for 3D ultrasound images, Comput. Med. Imag. Graph., vol. 28, pp. 461–470, 2004.

[83] C. R. Castro-Pareja, O. S. Dandekar, and R. Shekhar, FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images, Proc. Real-Time Imag. IX, vol. 5671, pp. 123–131, 2005.

[84] M.-J. Kim, H.-J. Yun, and M.-H. Kim, Faster, more accurate diffusion filtering for fetal ultrasound volumes, Image Anal. Recognit., vol. 4142, pp. 524–534, 2006.

[85] L. Wang, D. Li, T. Wang, J. Lin, Y. Peng, L. Rao, and Y. Zheng, Filtering of medical ultrasonic images based on a modified anisotropic diffusion equation, J. Electron. (China), vol. 24, pp. 209–213, 2007.

[86] Q. Huang, Y. Zheng, M. Lu, T. Wang, and S. Chen, A new adaptive interpolation algorithm for 3D ultrasound imaging with speckle reduction and edge preservation, Comput. Med. Imag. Graph., vol. 33, pp. 100–110, 2009.

[87] P. A. Picot, D. W. Rickey, R. Mitchell, R. N. Rankin, and A. Fenster, Three-dimensional colour Doppler imaging, Ultrasound Med. Biol., vol. 19, pp. 95–104, 1993.

[88] D. B. Downey and A. Fenster, Vascular imaging with a three-dimensional power Doppler system, Amer. J. Roentgenol., vol. 165, pp. 665–668, 1995.

[89] C. J. Ritchie, W. S. Edwards, L. A. Mack, D. R. Cyr, and Y. Kim, Three-dimensional ultrasonic angiography using power-mode Doppler, Ultrasound Med. Biol., vol. 22, pp. 277–286, 1996.

[90] H. Ohishi, T. Hirai, R. Yamada, S. Hirohashi, H. Uchida, H. Hashimoto, T. Jibiki, and Y. Takeuchi, Three-dimensional power Doppler sonography of tumor vascularity, J. Ultrasound Med., vol. 17, pp. 619–622, 1998.

[91] A. Ozdemir, H. Ozdemir, I. Maral, O. Konus, S. Yucel, and S. Isik, Differential diagnosis of solid breast lesions: Co