
  • 7/30/2019 Chapter1 Color Image Processing


    Chapter 1

    Introduction: Colour Image Processing

    1.1 Introduction

Colour is a perceived phenomenon, not a physical property of light [San98]. Among all the human senses, sight and the perception of colour are perhaps the most fascinating. Historically, most image processing has dealt with binary or black and white (B&W) images, largely because of the high cost and limited availability of sensors and processing resources in the past. A colour image requires about three times more data to process, considering that 8 bits suffice for B&W while 24 bits are needed for colour. Colour has only been studied intensively over the last 25 years, and only recently have low-cost sensors and sufficient computing power become readily available. With the appearance of computers with larger amounts of memory and faster speeds, the processing of colour images is now far more practical to realise.

With cheaper sensors comes the need to explore different low-cost processing options. Special-purpose hardware, which gives real-time performance, is therefore a very attractive alternative. This thesis explores the realisation and implementation of colour image processing in dedicated hardware, as opposed to a general-purpose machine such as a personal computer. Such realisations translate into much faster speeds, permitting the real-time processing that is important to various sectors such as industry and medicine. Dedicated hardware also reduces costs dramatically.

In this work, a Universal Colour Transformation Hardware (UCTH) system that operates at real-time video rate is proposed. The UCTH serves two objectives: first, to represent colour in a convenient way and, second, to implement limited colour image processing


algorithms. Since most colour image processing applications rely on an appropriate colour space to carry out their algorithms, and several references exist in the literature, the UCTH permits transformations from Red Green Blue (RGB) to alternative colour spaces. These can be set up by the user for a specific application in order to realise a particular digital image-processing task, e.g. colour segmentation, hue shifting, clustering or edge detection.

In order to handle and manipulate colour acquired from capture devices such as single-chip or three-chip cameras, it is essential to understand the mechanisms of colour vision, the representation of colour, and the capabilities and restrictions of colour imaging devices. The following sections of this chapter explore these issues, and at the end the organisation of this thesis is presented.

    1.2 The history of colour

Although colour has always existed, it was Sir Isaac Newton's experiments at Trinity College, Cambridge in 1666 that established a physical basis for colour [Fau88] [Coh95]. Using sunlight and a prism, Newton discovered that white light could be made to separate into a series of different colours. He also determined that the effective refractive power of the glass varied according to the position of the colour in this sequence, with red (the longest wavelength) deviating least in angle and blue (the shortest wavelength) the most. Newton coined the term spectrum to describe the ghostly quality of this effect.

A better understanding of light and colour came when Thomas Young hypothesised in 1801 that human eyes have three receptors and that the difference in their responses contributes to the sensation of colour [Mac93]. He demonstrated that by overlapping three primary lights of the principal colours Red, Green and Blue-Violet he could obtain the secondary colours Yellow, Cyan and Magenta [Jac94].

Another way of producing additive colour is with a spinning disc bearing coloured regions with adjustable sectors and a central coloured area. During the 1850s James Clerk Maxwell [Max95] found with his discs that, by adjusting the sectors and spinning the disc fast enough, the sectors appear to fuse and match the colour in the central area, a phenomenon now referred


to as trichromatic generalisation or trichromacy [Sha97]. Around that time, Helmholtz [Mac70] explained the distinction between additive and subtractive colour mixing and accounted for trichromacy in terms of the spectral sensitivity curves of three colour-sensing fibres in the eye. Trichromacy reflects the fact that the human eye has three colour receptors, known as the S, M and L cones for short, medium and long wavelength sensitivity respectively, whose sensitivities can now be determined directly [Sto93].

Before such measurements were possible, Colour Matching Functions (CMFs) were determined through psychophysical experiments [Wys00], in which a normal observer matched any spectral light to a mixture of three fixed-colour primary lights. The observer sees a split (bipartite) field in which a test colour has to be matched by varying the contributions of the three primary lights that generate the mixture. This forms the basis of all colorimetry.

    1.3 Colorimetry

The science of colour and its measurement is known as colorimetry. The Commission Internationale de l'Eclairage (CIE) is the main organisation responsible for colour metrics and terminology. The first colour specification was developed by the CIE in 1931 and continues to form the basis of modern colorimetry [CIE86]. The following terms have been defined by the CIE and are given in [Hun91]:


Brightness: The human sensation by which an area exhibits more or less light.

Hue: The human sensation according to which an area appears to be similar to one, or to proportions of two, of the perceived colours red, yellow, green and blue.

Lightness: The sensation of an area's brightness relative to a reference white in the scene.

Chroma: The colourfulness of an area relative to the brightness of a reference white.

Saturation: The colourfulness of an area relative to its own brightness.

    A colour, therefore, is a visual sensation produced by a specific spectral power distribution

    (SPD) incident on the retina.


The CIE system works by weighting the SPD of an object in terms of the human visual system (HVS), providing two different but equivalent sets of CMFs. The first set, known as CIERGB (Red-Green-Blue), is associated with monochromatic primaries at wavelengths of 700, 546.1 and 435.8 nm respectively [Clu72] and is depicted on the left of Figure 1.1. Their radiant intensities are adjusted so that the tristimulus values correspond to a constant spectral radiance. The second set of CMFs, known as CIEXYZ, is shown in the right portion of Figure 1.1. A set of three artificial primaries X, Y and Z was created [Fai98] for CIEXYZ to avoid the negative values that appear in CIERGB, which simplifies operations. CIEXYZ is defined as a linear transformation of CIERGB, and since an infinite number of transformations can be defined that meet this non-negativity requirement, it is not a physically realisable space [Sha97].

    Figure 1.1: Spectral tristimulus values of the CIE 1931 standard colorimetric observer for

    CIERGB (left) and CIEXYZ (right).

    1.4 Colour models

Colour models, also commonly called colour spaces or colour coordinate systems, are three-dimensional (3-D) arrangements of colour sensations in which colours are specified by points in the space [San98]. Colour models are used to classify colours and to qualify them according to such attributes as hue, saturation, chroma, lightness or brightness. They are further used for colour matching and are valuable resources for anyone working in any medium, e.g. printing, video or image processing.


When considering the variety of available colour systems, it is necessary to classify them into a few categories according to their definitions. Figure 1.2 shows the most typical colour systems, grouped into families of systems. These include hardware-oriented systems, user-oriented systems, artificial primary systems, perceptually uniform systems and polar coordinate systems. The shaded boxes in Figure 1.2 represent the colour spaces considered during this research. Other families of systems include normalised systems and independent non-correlated systems. It should be noted that all the colour systems depicted in Figure 1.2 originate from RGB, which is by far the most used colour space for the acquisition and display of images, by colour cameras and CRT monitors respectively.

    Figure 1.2: Colour Systems and colour spaces. The highlighted colour spaces are

    considered for the work presented in this thesis.

[Figure 1.2 groups the colour spaces as follows: Hardware Oriented Systems, comprising primary based (RGB), printing based (CMY, CMYK) and TV based (YIQ for NTSC, YUV for PAL (EBU), YCrCb for digital television) spaces; Normalised Systems (rgb, xyz); Artificial Primary systems (CIEXYZ, KodakYCC); Perceptual User Oriented Systems (IHS, HLS, HSV, TEKHVC); Perceptually Uniform Systems (CIExyY, CIEu'v'Y, CIEL*a*b*, CIEL*u*v*); Polar Coordinate Systems (Y,S,H and L,C*,H°); and Independent Non-Correlated Systems (OHTA, KLT).]


    1.4.1 Hardware oriented systems

These colour systems are device-dependent models [Bas92], which means that the colour produced depends on the equipment and the set-up used to produce it. For example, when a computer CRT monitor displays the colour given by pixel values RGB = (0, 255, 255) (i.e. cyan), the result alters as the brightness and contrast of the monitor are changed. In the same way, if the monitor were replaced, the red, green and blue phosphors used in the new screen would have slightly different characteristics and the colour produced would change.

    1.4.1.1 The RGB colour space

In this model, colours are formed by combining the three primary colours red, green and blue, which makes RGB an additive system. It is the most basic and best-known colour model and can be seen in Figure 1.3. Different levels of each of the primary colours can produce a wide range of colours, i.e. the gamut [Gon93]. The benefit of this colour system is its ease of implementation, and it is therefore widely used for colour cameras and computer cathode ray tube (CRT) displays. Among the disadvantages of this colour space are its device dependency, the high correlation between its components and its poor perceptual intuitiveness for a user. Common practice is to assume that colour similarity is inversely proportional to a distance metric in the space. This assumption proves inappropriate for RGB, since equal distances in this colour space rarely match perceived equivalence in similarity [Sea99].

    Figure 1.3: Two views of the RGB colour space

Colour image processing algorithms based on RGB include pixel clustering based on empty or full bins of colour histograms, reported in [Nov92]. Potential applications of


RGB to machine vision are discussed from a physics-based vision perspective, along with the calibration procedure, in [Mar96]. Zugaj and Lattuati [Zug98] employed a gradient operator, applied to the multi-channel RGB image, to produce edges for colour segmentation. A series of surfaces closely modelling how the human eye's cone receptors respond to RGB changes gives the red/green and yellow/blue chromatic directions of the surfaces, while the rod receptors, being sensitive to lightness variations, give the separation of the surfaces; this produces the RGYB colour geometry reported in [War90]. One-chip CCD cameras use interpolation methods to produce full-colour images in RGB; the design of practical, high-quality filter-array interpolation algorithms based on a simple image model is discussed in [Ada97] and [Ada98].

The normalised components, called chromaticity coordinates, take only chrominance into account. The chromaticity coordinates of the RGB system are denoted rgb, where r = R/(R+G+B), g = G/(R+G+B) and b = 1 - r - g. Colour can alternatively be represented in the chromaticity diagram by the coordinates (r, g) [Wys00] [Van00]. Values of the rgb coordinates are much more stable under changes in illumination than RGB coordinates [Ber87]. For this reason, rgb has been used in applications such as modelling the stability of our perception of surface colours despite changes in illumination, i.e. colour constancy [Fin95]. Early applications of rgb can be found in the analysis of aerial pictures [Ali79] and in edge detection [Nev77]. Based on the intersection of histograms, Swain and Ballard [Swa91], using an opponent colour axis derived from rgb, created a colour indexing system for image retrieval based purely on colour properties rather than geometrical properties.
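The normalisation above is straightforward to write out; a minimal sketch (with hypothetical pixel values) illustrating why the rgb coordinates are stable under changes in illumination intensity:

```python
def chromaticity(R, G, B):
    """Normalised rgb chromaticity coordinates: r = R/(R+G+B), g = G/(R+G+B), b = 1-r-g."""
    total = R + G + B
    if total == 0:                  # black: chromaticity is undefined, return neutral grey
        return (1 / 3, 1 / 3, 1 / 3)
    r, g = R / total, G / total
    return (r, g, 1 - r - g)

# Doubling the illumination scales R, G and B equally,
# but leaves the chromaticity coordinates unchanged:
print(chromaticity(100, 60, 40))   # (0.5, 0.3, 0.2)
print(chromaticity(200, 120, 80))  # (0.5, 0.3, 0.2)
```

Uniform scaling of (R, G, B) cancels in the ratios, which is exactly the intensity invariance exploited by the colour constancy work cited above.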

    1.4.1.2 The CMY colour space

CMY, the Cyan-Magenta-Yellow colour space, is a subtractive model used primarily in printing and photography. Printers often include a fourth component, black, giving CMYK. Black, symbolised by K, is substituted for equal parts of CMY to lower ink costs and to generate a pure black. Subtractive colours are seen when pigments in an object absorb certain wavelengths of white light while reflecting the rest. The wavelengths that are reflected, as opposed to being absorbed, are the ones perceived as colour.
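The subtractive complement and the K substitution described above can be sketched as a simple grey-component replacement; real printing pipelines use device profiles, so this is illustrative only:

```python
def rgb_to_cmyk(R, G, B):
    """Convert 8-bit RGB to CMYK fractions via simple grey-component replacement."""
    # Subtractive complement: C = 1 - R', etc., with R' normalised to [0, 1]
    c, m, y = 1 - R / 255, 1 - G / 255, 1 - B / 255
    k = min(c, m, y)                # the common (grey) part is replaced by black ink
    if k == 1.0:                    # pure black: no coloured ink needed
        return (0.0, 0.0, 0.0, 1.0)
    # Remove the grey component from each coloured ink and rescale
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(rgb_to_cmyk(255, 0, 0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0, 0, 0))     # black   -> (0.0, 0.0, 0.0, 1.0)
```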


Verikas et al. [Ver97] found an application in determining the colours of inks used to produce multi-coloured pictures, created by printing dots of cyan, magenta, yellow and black primary colours on top of each other through screens with different raster angles. Colour reproduction processes often rely on three (CMY) or four (CMYK) colours, and Herzog [Her96] proposed a new analytical method to represent the surface of a colour gamut with a closed expression directly in CIEL*a*b*, based on the similarity of the gamut to the CMY cube.

    1.4.1.3 The YIQ colour space

The National Television Standards Committee (NTSC) of the United States uses a colour specification consisting of luminance (Y) and two colour-difference signals called in-phase and quadrature (I and Q) [Hut90]. YIQ exploits a useful property of our visual system: it is more sensitive to changes in luminance than to changes in hue and saturation (i.e. colour); that is, our ability to discriminate colour information spatially is weaker than our ability to discriminate monochromatic information spatially [Fol90]. The result is an I-axis encoding chrominance information along a blue-green to orange vector, and a Q-axis encoding chrominance information along a yellow-green to magenta vector.
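The transformation from (gamma-corrected) RGB to YIQ is linear; a sketch using the commonly quoted NTSC matrix coefficients:

```python
def rgb_to_yiq(r, g, b):
    """RGB in [0, 1] to YIQ using the usual NTSC matrix coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # blue-green <-> orange axis
    q = 0.211 * r - 0.523 * g + 0.312 * b   # yellow-green <-> magenta axis
    return (y, i, q)

# A neutral grey carries no chrominance: I and Q are (near) zero,
# which is why bandwidth can be concentrated in the Y channel.
y, i, q = rgb_to_yiq(0.5, 0.5, 0.5)
```

Because the I and Q rows each sum to zero, any grey (r = g = b) maps to pure luminance, matching the bandwidth argument above.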

Applications of the YIQ space have proven useful in colour image coding [Ove95] [Van94] to obtain image compression. An image retrieval system named PicSOM [Laa99] uses the Y component to perform image querying on the World Wide Web (WWW). Speech recognition based on video represented in YIQ, with lip tracking combined with acoustic signals, can be found in [Hen95]. Boo and Bose [Boo] proposed a procedure to restore a single colour image degraded by shift-invariant blur in the presence of additive stationary noise. A reduction of up to 50% in the size of the frame buffer used with colour palettes to display images on CRT computer screens is achieved by vector quantisation in the YIQ colour space [Wu96].

    1.4.1.4 The YUV colour space

This is a specification of the European Broadcast Union (EBU), and the PAL and SECAM television systems in Europe and in many other countries use it [Car69]. It consists of


a luminance signal Y and two colour-difference signals U and V that are used as transmission coordinates. The YUV coordinate system was initially proposed as the NTSC transmission standard, but was replaced by the YIQ system when it was found that the I and Q signals could be reduced in bandwidth to a greater degree than U and V for an equal level of visual quality [Pra91].
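In this scheme the chrominance signals are scaled colour differences; a sketch with the commonly quoted analogue scaling factors (RGB values assumed in [0, 1]):

```python
def rgb_to_yuv(r, g, b):
    """RGB in [0, 1] to analogue YUV: U and V are scaled colour differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # same luminance as YIQ
    u = 0.492 * (b - y)                     # scaled blue-difference
    v = 0.877 * (r - y)                     # scaled red-difference
    return (y, u, v)
```

As with YIQ, a neutral grey (r = g = b) makes both differences vanish, so U = V = 0 and only the luminance signal remains.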

The effect of colour quantisation schemes on the performance of image retrieval with colour clustering using YUV, RGB, HSB and CIEL*u*v* can be seen in [Wan98]. A novel technique for efficient coding of texture to be mapped onto 3-D surfaces using digital maps represented in YUV format is given in [Hor97]. A real-time algorithm for extracting shape parameters from facial features such as eyes and mouth was carried out in [Rao96]. The YUV colour space has proved useful in Motion Picture Experts Group (MPEG) encoding [Tor97] [Kru95]. Robotic vision using the YUV colour model is also reported in the literature [Sch97] [Nak98].

    1.4.1.5 The YCrCb colour space

This space is independent of the TV signal coding systems and is oriented primarily to digital television [Rob97]. The component Y for luminance is identical to that of YUV and YIQ. The chromatic information is found in the Cr (colour red) and Cb (colour blue) signals. Current applications in image compression (e.g. the JPEG format) often employ the YCrCb model as a quantisation space [Coo93] [Ler95]; it is also used to encode the overall prefix codes resulting from many forms of image compression algorithms so that they are largely independent of each other [Whi98]. A methodology proposed by Schmitz [Sch97a] addresses chrominance sub-sampling and the possible degradation of colour in images. Localisation of facial regions in videophone images using both luminance and chrominance is another application of YCrCb, researched in [Cha96].
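For 8-bit digital images the chrominance components are offset so that they fit an unsigned range; a sketch using the full-range (JPEG/JFIF-style) coefficients, with the chrominance centred on 128:

```python
def rgb_to_ycbcr(R, G, B):
    """8-bit RGB to full-range YCbCr (JFIF-style), chrominance centred on 128."""
    Y  = 0.299 * R + 0.587 * G + 0.114 * B             # same luminance weights
    Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B   # scaled blue-difference
    Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B   # scaled red-difference
    return (Y, Cb, Cr)

# A mid grey maps to (128, 128, 128): zero chrominance sits at the 128 offset.
print(tuple(round(c) for c in rgb_to_ycbcr(128, 128, 128)))  # (128, 128, 128)
```

Broadcast (BT.601 "studio range") variants additionally scale Y into 16-235 and the chrominance into 16-240; the full-range form above is the one JPEG uses.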


    1.4.2 User oriented systems

Other names given to user-oriented systems include perceptual colour systems, perceptually oriented systems and computer graphics colour spaces. These systems decouple the lightness, brightness or value information from the chromatic information.

The best known and most influential of all models based on perceptual principles was proposed in 1905 by Albert Munsell and is known as the Munsell colour system, which is still widely in use today [Mun76]. Munsell modelled his system as an orb around whose equator runs a band of colours, as can be observed in Figure 1.4. The axis of the orb is a scale of neutral grey values with 10 divisions, white at the top and black at the bottom. Extending out from the axis at each grey value is a graduation of colour progressing from neutral grey to full saturation. Munsell introduced three aspects that can describe any of thousands of colours; in Munsell's terms these are:

Hue: The quality by which we distinguish one colour from another. Munsell selected five primary colours: red, yellow, green, blue and purple; and five intermediate colours: yellow-red, green-yellow, blue-green, purple-blue and red-purple, making a wheel defining 100 compass points.

Value: The quality by which we distinguish a light colour from a dark one. Value is a neutral axis that refers to the grey content of the colour. It ranges from black to white.

Chroma: The quality that distinguishes a pure hue from a grey shade. The chroma axis extends from the value axis at a right angle.

A method of expanding colour images in terms of Munsell's three aspects, or attributes, of colour perception can be found in [Tom87].

    1.4.2.1 The IHS colour space

Sometimes known as the Hue-Saturation-Intensity (HSI or IHS) model, this is an intuitive model for specifying colours. Hue refers to the name given to a colour (e.g. red, yellow, etc.) and is represented in degrees on a colour wheel, with values from 0° (red) to 120° (green) to 240°


(blue) and back to 360° (red again). Saturation is the purity of the colour. Low saturation (< 20%) results in grey regardless of the hue; middle saturation (40% to 60%) produces pastels; and high saturation (> 80%) results in vivid colours. Intensity is the brightness of a colour and ranges from 0% (black) to 100% (white). Intensity is sometimes also referred to as luminance or lightness [Gon93].

    Figure 1.4: Munsell's colour space.
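A common geometric conversion from RGB to these three attributes uses the arccos form for hue; formulations vary in the literature, so the following is one assumed variant:

```python
import math

def rgb_to_hsi(r, g, b):
    """RGB in [0, 1] to (H in degrees, S, I) using the arccos formulation."""
    i = (r + g + b) / 3.0                      # intensity: mean of the channels
    if i == 0:
        return (0.0, 0.0, 0.0)                 # black: hue and saturation undefined
    s = 1.0 - min(r, g, b) / i                 # saturation: distance from the grey axis
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        return (0.0, s, i)                     # on the grey axis: hue undefined
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                                  # lower half of the colour wheel
        h = 360.0 - h
    return (h, s, i)

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0 degrees, fully saturated
```

Pure red, green and blue land at 0°, 120° and 240° respectively, matching the colour wheel described above.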

Several image processing functions can be realised using the IHS colour space. These include segmenting colour using the three colour attributes hue, saturation and intensity in a robust specialised architecture for real-time tracking [Gar96]. Remote sensing is another popular area where the IHS colour model is appropriate for monitoring various natural resources and environmental hazards. In geohazard assessment, synthetic aperture radar1 (SAR) and thematic mapper (TM) images are used to characterise areas affected by landslides and coastal hazards in the lower Fraser Valley within the Canadian Rockies [Sin95]. Other applications include the study of soil salinity and alkalinity dynamics [Dwi98], image fusion techniques exploiting the complementary nature of multi-sensor image data [Sch98] [Sun98], discriminating areas of hydrothermally altered material in vegetated terrain in central Brazil

1 Many people associate the word aperture with photography, where the term represents the diameter of the lens opening. The camera's aperture determines the area through which light is collected. Similarly, a radar antenna's length partially specifies the area through which it collects radar signals; the antenna's length is therefore also called its aperture. In general, the larger the antenna, the more unique information can be obtained about a particular viewed object, and with more information a better image of that object can be created (improved resolution). It is prohibitively expensive to place very large radar antennas in space, however, so researchers found another way to obtain fine resolution: they use the spacecraft's motion and advanced signal processing techniques to simulate a larger antenna. A SAR antenna transmits radar pulses very rapidly; in fact, the SAR is generally able to transmit several hundred pulses while its parent spacecraft passes over a particular object, so many backscattered radar responses are obtained for that object. After intensive signal processing, all of those responses can be combined so that the resulting image looks as if the data were obtained from a big, stationary antenna. The synthetic aperture in this case is therefore the distance travelled by the spacecraft while the radar antenna collected information about the object.


[Alm97], and Mediterranean vegetated coastal area classification [Gri97]. Other areas of study include the analysis of skin lesions [Fis96].

    1.4.2.2 The HSV colour space

The HSV model, created by Smith [Smi78] (also called the HSB model, with B for brightness), is a user-oriented colour model based on the intuitive appeal of the artist's tint, shade and tone [Fol90]. The coordinate system is cylindrical and the subset of the space within the model is defined as a hexcone, as it resembles a six-sided pyramid. Value, or V, ranges from black (0) to white (1). Hue, or H, is measured by the angle around the vertical axis, with red at 0°, green at 120°, blue at 240°, and back to red at 360°. Complementary colours are displaced by 180° and lie opposite one another. Saturation, or S, is the purity of the colour: a saturation of 100% represents a pure colour and a saturation of 0% represents a grey level. Figure 1.5 shows a three-dimensional model of the HSV colour space.

Figure 1.5: HSV colour space. The image on the left shows a full view of the colour space, whilst the view on the right is a cross-section.
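Python's standard colorsys module implements this hexcone model (returning hue as a fraction of a turn rather than in degrees), which makes the geometry above easy to verify:

```python
import colorsys

# colorsys.rgb_to_hsv takes RGB in [0, 1] and returns (h, s, v) with h in [0, 1).
# Multiplying h by 360 recovers the hue angle described in the text.
for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("green", (0.0, 1.0, 0.0)),
                  ("blue", (0.0, 0.0, 1.0))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(name, h * 360, s, v)   # hue angles 0, 120 and 240; S = V = 1 for pure colours
```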

Some important advantages of the HSV colour space are its good compatibility with human intuition and its decoupling of the chromatic values from the achromatic values. Segmentation and tracking of faces or facial regions in colour images, where skin-like regions are determined from the colour attributes hue and saturation, is a common application of HSV [Sob96] [Her99] [Yan00]. Another common area for this colour space is research into multimedia information, e.g. images and video [Meh97] [Xio95], where the images are usually retrieved via some query method [And99]. Applications can also be found in the identification of biological objects at the microscopic level, e.g. cell structures [Pav96], and in automatically detecting danger


labels on the backs of containers [Jr96]. Based on the extraction of gradient discontinuities, edge detection can be achieved using HSV [Tsa96]. Robotic vision [Pri00] and autonomous vision-based avoidance systems [Lor97] have also been reported using this model.

    1.4.2.3 The HLS colour space

Another model based on intuitive colour parameters is the HLS system used by Tektronix [Tek90] for some of its terminals. This model has a double-cone representation that can be thought of as a deformation of HSV, in which white is pulled upward from the V = 1 plane to form the upper hexcone. The three parameters in this model are called hue (H), lightness (L) and saturation (S) and can be seen in Figure 1.6. Hue has the same meaning as in the HSV and IHS models. The vertical axis in this model is called lightness: at L = 0% we have black, and white is at L = 100%. The grey scale lies along the L axis and the pure hues lie on the L = 50% plane. Saturation gives the relative purity of the colour; hence at L = 50% and S = 100% a pure colour is generated.
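The double-cone geometry can be checked with the standard colorsys module, whose rgb_to_hls function returns (h, l, s) with each component in [0, 1]: a pure hue lands exactly on the L = 50% plane with S = 100%.

```python
import colorsys

# A pure red: hue 0, lightness 0.5 (the 50% plane), saturation 1.0
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)
print(h, l, s)            # 0.0 0.5 1.0

# White sits at the apex of the upper cone: L = 1 and the colour is desaturated
h, l, s = colorsys.rgb_to_hls(1.0, 1.0, 1.0)
print(l, s)               # 1.0 0.0
```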

Tektronix later created the TEKHVC (Hue, Value and Chroma) colour system [Bas92], which resembles HLS only in shape but was created to be a perceptually uniform colour space, in which measured and perceived distances between colours are approximately equal [Fol90]. Applications using HLS include cluster detection combined with probabilistic relaxation, where clusters of a specific colour are extracted in HLS to locate tumours in a bladder image [Che98]. The HLS model has also been employed to visualise trajectories in higher-dimensional dynamical systems [Weg97].

    1.4.3 Perceptually uniform systems

The colour models developed by the Commission Internationale de l'Eclairage (CIE) [CIE86] have the following objectives:

1. To be completely independent of any device or other means of emission or reproduction.


2. Perceptual colour differences recognised as equal by the human eye should correspond to equal Euclidean distances.

    Figure 1.6: The HLS colour space

As perceptually uniform colour spaces of this kind, two spaces recommended by the CIE in 1976 are mainly in use today. These are the CIEL*a*b* space (used for reflected light) and the CIEL*u*v* space (mainly used for emitted light) [Wys00].

The CIE colour model was developed to match as closely as possible how humans perceive colour. The key elements of the CIE model are the definitions of the standard sources and the specifications for a standard observer [Fai98]. The following CIE standard sources were defined in 1931:

Source A: A tungsten-filament lamp with a colour temperature of 2845 K.

Source B: A model of noon sunlight with a temperature of 4800 K.

Source C: A model of average daylight with a temperature of 6500 K.

Source D: The daylight D series of illuminants; illuminant D65, with a temperature of 6500 K, is the most commonly referenced.


    1.4.3.1 The artificial primary CIEXYZ colour space.

As mentioned in Section 1.3, the CIE considered the tristimulus values for red, green and blue undesirable for a standardised colour model because of their inclusion of negative values (Figure 1.1, left side), which were difficult to manipulate mathematically in 1931. Instead, a mathematical transformation was used to convert the RGB data to a system using only positive values, in order to simplify operations. The reformulated tristimulus values were given the identifiers XYZ and are shown in Figure 1.7. These values do not correspond directly to red, green and blue but are a close approximation. The curve for the Y value is equal to the curve describing the human eye's response to the total power of the light source; for this reason Y is called the luminance factor, and the XYZ values have been normalised so that Y always has a value of 100 [Wys00].

    Figure 1.7: The CIEXYZ artificial primary.

Obtaining the XYZ tristimulus values is only part of defining a colour; the colour itself is more easily understood in terms of hue and chroma. To make this possible, the CIE used the XYZ tristimulus values to formulate a set of normalised chromaticity coordinates, denoted xyz (lowercase XYZ).

The chromaticity coordinates are used in conjunction with a chromaticity diagram, the most familiar being the CIE 1931 xyY, or CIExyY [CIE86], chromaticity diagram. The x and y coordinates serve as locators for any value of hue and chroma. The third dimension, Y, is the lightness or luminance of the colour; it extends towards white perpendicular to the xy plane. As


the Y value increases and the colour becomes lighter, the range of colours, or gamut, decreases, so that the colour space at Y = 100 is just a small section of the original area.
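The projection onto the chromaticity diagram, and the return trip, can be written out directly; a small sketch (with hypothetical tristimulus values) showing that (x, y) plus the luminance Y suffices to recover X and Z:

```python
def xyz_to_xyY(X, Y, Z):
    """Project tristimulus values onto the chromaticity diagram: x = X/(X+Y+Z), etc."""
    total = X + Y + Z
    return (X / total, Y / total, Y)

def xyY_to_xyz(x, y, Y):
    """Recover the tristimulus values from chromaticity (x, y) and luminance Y."""
    X = x * Y / y
    Z = (1 - x - y) * Y / y
    return (X, Y, Z)

x, y, Y = xyz_to_xyY(50.0, 100.0, 50.0)
print(x, y)                  # 0.25 0.5
print(xyY_to_xyz(x, y, Y))   # (50.0, 100.0, 50.0)
```

Since x + y + z = 1, only two chromaticity coordinates need to be stored, which is why the diagram is two-dimensional.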

Many attempts have been made to derive chromaticity diagrams that are perceptually more uniform. Although no attempt to convert this nominal scale into an interval scale has been entirely successful, one of the results obtained is worth mentioning: the chromaticity diagram currently recommended by the CIE for general use, the CIE 1976 Uniform Chromaticity Scales (UCS) [CIE86] [Fai98]. This system is denoted CIEu'v'Y, where u' and v' are the coordinates of the chromaticity diagram and Y the luminance.

Applications using CIExyY alone are not commonly found in the literature, but an automatic system for the detection of human faces, combining skin-colour image segmentation with shape analysis, showing cumulative distributions and comparing them to HSV, is given in [Ter98].

    1.4.3.2 The CIEL*a*b* colour space

    The CIEL*a*b* system was adopted by the CIE in 1976 as a model that better showed uniform colour spacing in its values [CIE86]. It is an opponent colour system based on the earlier (1942) system of Richard Hunter [Hun91] [Rob90] called L, a, b. It consists of an (a*, b*) colour plane that maps the hue of a colour on two dimensions: an a (horizontal) dimension, separating red colours on the right (a+) from green colours on the left (a-), and a b (vertical) dimension, separating yellow colours at the top (b+) from blue colours at the bottom (b-). The position of a colour in the space represents the overall contribution of red or green, and blue or yellow, to its hue. Orange is a combination of red (a+) and yellow (b+), so orange appears in the first quadrant (a+, b+) of the space. This is clearly depicted in Figure 1.8.
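    To make the opponent structure concrete, a commonly used form of the XYZ to CIEL*a*b* conversion is sketched below (illustrative Python; the D65 white point and the 0.008856 threshold are the conventional CIE choices, assumed here rather than taken from this thesis):

```python
def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Convert XYZ to CIEL*a*b* relative to a reference white (D65
    assumed). Positive a* leans red, negative a* green; positive b*
    leans yellow, negative b* blue."""
    def f(t):
        # Cube root with the CIE linear branch for very dark colours
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b
```

    The reference white itself maps to L* = 100 with a* = b* = 0, sitting on the achromatic axis of Figure 1.8.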

    Many colour image processing algorithms, such as coding, segmentation and gamut mapping, choose the CIEL*a*b* colour space for its desirable properties [Woe96] [Dub97]. Using hue-correcting look-up tables, Braun and Fairchild [Bra98] performed a series of experiments to test the utility of using constant-hue visual data to linearize the CIEL*a*b* space with respect to hue, and present their results.


    Figure 1.8: The CIEL*a*b* colour space. The figure on the left shows the main axes and the figure on the right a 3-D model of the colour space.

    CIEL*a*b* is perhaps one of the more popular colour spaces for image processing and it has been used in a number of applications; a few examples are listed next.

    Colour removal of polluting dyes originating from textile mills has been studied in [Yeh95]. Vallart et al. [Val94] used CIEL*a*b* for the measurement of colour in the study of painted works of art. A model and mathematical formulation for describing the light scattering and ink spreading phenomena in printing can be found in [Emm00]. Using the L* and b* (yellowness) parameters, a method was proposed in [Bha97] to control the barrel temperature and screw speed for the extrusion of a rice blend. Techniques for examining the effects of heating soymilk were described in [Kwo99]. A method for embedding invisible information in colour images (i.e. watermarking) was proposed in [Fle97].

    Extensive experimentation has been done in CIEL*a*b*-related areas such as gamut mapping techniques [Mon97] and obtaining figures of merit to evaluate how closely CIEL*a*b* matches the actual perceived accuracy for cameras and scanners [Sha97a] [Har98] [Har99] [Har00].

    In order to measure colour reproduction errors of digital images, Zhang and Wandell [Zha96] proposed an extension to the CIEL*a*b* colour metric called s-CIEL*a*b*, and applications of this extension can be found in [Zha97].


    1.4.3.3 The CIEL*u*v* colour space

    CIEL*u*v* originates from a series of chromaticity diagrams that were inadequate because the two-dimensional diagram failed to give a uniformly spaced visual representation of what is actually a three-dimensional colour space [Wys00]. It originated first from CIExyY in 1931; next, a new set of values (u, v) that presented a visually more accurate two-dimensional model was proposed by the CIE, giving CIEuvY. However, this was still found unsatisfactory and, in 1975, the CIE proposed modifying the (u, v) diagram, yielding the new (u', v') values and creating CIEu'v'Y, which has much better visual uniformity. The final successor, in an attempt to further improve the (u', v') diagram, gave the CIEL*u*v*, where L* replaces Y. This colour space is also based on the opponent-colour theory, which models human colour vision. In this colour space the u* axis represents the red-green coordinate, whilst the v* axis represents yellow-blue. The L* axis describes variations in lightness. Shown in Figure 1.9 is a 3-D image of the CIEL*u*v* colour space.
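    The path from (u', v') to CIEL*u*v* can be sketched as follows (illustrative Python using the standard CIE formulae; the D65 white point is an assumption for the example):

```python
def xyz_to_luv(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Convert XYZ to CIEL*u*v* via the (u', v') chromaticity
    coordinates, relative to a reference white (D65 assumed)."""
    def uv_prime(X, Y, Z):
        d = X + 15 * Y + 3 * Z
        return 4 * X / d, 9 * Y / d
    up, vp = uv_prime(X, Y, Z)
    upn, vpn = uv_prime(Xn, Yn, Zn)
    yr = Y / Yn
    # Lightness uses the same cube-root law as CIEL*a*b*
    L = 116 * yr ** (1 / 3) - 16 if yr > 0.008856 else 903.3 * yr
    return L, 13 * L * (up - upn), 13 * L * (vp - vpn)
```

    As with CIEL*a*b*, the reference white maps to L* = 100 with u* = v* = 0.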

    Figure 1.9: The CIEL*u*v* colour space.

    Many applications based on the CIEL*u*v* colour space can be found. It has proved very useful in colour image segmentation methods that use a clustering approach [Sch93] [Uch94] [Hea96]. Recognition and localization of two-dimensional objects on a known background is another application [Kaa97]. Using lightness and saturation, Liu and Yan [Liu94a] enhanced edges in colour images. Histogram manipulation in CIEL*u*v* is also popular, both to enhance images [Mls96] and to retrieve images from video databases [Par99].

    A study to examine the relationship between colorfulness judgments of images of natural

    scenes and statistical parameters of chroma distribution over the images can be found in

    [Yen97]. An identification experiment of naming 35 colours with no previous training was


    done by [Der95] and an association of CIE colorimetry and colour displays is given by Schanda

    [Sch96].

    1.4.4 Polar coordinate systems

    Polar coordinate systems are basically a rectangular-to-polar coordinate transformation of a particular colour space. For TV colour spaces (e.g. YIQ, YUV and YCrCb) a vector having a magnitude S (Saturation) and an angle H (Hue) is obtained from the chroma channels expressed in rectangular form, whilst the luminance Y remains unaltered. For perceptually uniform colour spaces (e.g. CIEL*u*v* and CIEL*a*b*) a vector is obtained with its magnitude given by C* (Chroma) and an angle H (Hue). Refer to Figure 1.2.
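    The rectangular-to-polar step itself is the same in every case; a minimal sketch in Python (the function name is illustrative, not from this thesis):

```python
import math

def to_polar(luma, c1, c2):
    """Rectangular-to-polar conversion of a chroma pair (c1, c2),
    e.g. (U, V) of YUV or (a*, b*) of CIEL*a*b*. The luminance or
    lightness channel passes through unchanged."""
    magnitude = math.hypot(c1, c2)                 # S or C*
    angle = math.degrees(math.atan2(c2, c1)) % 360  # H in [0, 360)
    return luma, magnitude, angle
```

    For instance, a chroma vector (3, 4) yields a magnitude of 5 at an angle of about 53.13 degrees.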

    1.4.5 Independent non-correlated systems

    These systems are also known as statistically independent component systems. They can be determined by different methods in order to obtain non-correlated components. As shown in Figure 1.2, Ohta [Oht80] defines a colour system with components I1, I2 and I3 obtained using the Karhunen-Loeve Transformation (KLT) [Loe55]. This transform is also commonly referred to as the eigenvector, principal component, or Hotelling transform, and is based on statistical properties of vector representations. These systems are not included or studied in this thesis, but are suggested in Chapter 7 as future research.
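    For orientation only, Ohta's I1, I2, I3 features are simple linear combinations of RGB that approximate the KLT of natural colour images; a minimal sketch (the coefficients are those commonly quoted from [Oht80], assumed here):

```python
def rgb_to_i1i2i3(r, g, b):
    """Ohta's decorrelated features: I1 approximates intensity,
    I2 a red-blue opponent axis, I3 a green-magenta opponent axis."""
    i1 = (r + g + b) / 3
    i2 = (r - b) / 2
    i3 = (2 * g - r - b) / 4
    return i1, i2, i3
```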

    1.5 Motivation and objectives behind this work

    After reviewing the literature on colour image processing algorithms and the colour spaces most frequently used for their implementation, it was decided to devise a common methodology that could realize transformations from RGB to most of them. RGB is by far the most widely used colour space for acquiring images (e.g. still cameras, video cameras, etc.) and for displaying them (e.g. CRT monitors). With colour image processing becoming increasingly popular, and with the drop in cost of imaging devices and hardware, it is now possible to face


    many challenges in optimizing designs into a relatively low cost technology. For this, it was decided not only to devise a universal transformation procedure, but also to implement it in hardware with the objective of having a low cost, high-performance system. The objectives of this research are summarized next:

    1. Devise a common, straightforward methodology capable of converting RGB to the most commonly used colour spaces reported in the literature, taking into account that images are obtained from charge-coupled device (CCD) still cameras or video cameras that generate outputs in the RGB colour space. Also, images will frequently be displayed on devices based on RGB (e.g. CRT computer monitors).

    2. Analyze and manipulate the conversion methodology, always considering that it should be implemented in hardware.

    3. When implemented in hardware, real-time conversion rates should be achieved, matching or exceeding the frame rates given by video cameras at full resolution and colour depth.

    4. Design an architecture flexible enough to permit expansion and interconnection with other hardware elements in order to perform some colour image processing algorithms as well.

    The colour spaces that will be considered during this work are summarized in Table 1.1.

    Colour System                      Sub-system                   Colour spaces considered
    Hardware oriented system           Television based systems     YIQ, YUV, YCrCb
                                       Printing systems             CMY
    Perceptual user oriented system                                 IHS, HLS, HVS
    Perceptually uniform system        Artificial primary system    CIEXYZ
                                       CIE systems                  CIExyY, CIEL*a*b*, CIEu'v'Y, CIEL*u*v*

    Table 1.1: Colour spaces considered during this research

    Some hardware applications for real-time colour image processing and colour space transformations can be found in the literature. A real-time tracking system based on Field Programmable Gate Arrays (FPGAs), performing colour segmentation in the IHS colour space, was reported in [Gar96]. FPGAs were also used by Mribout et al. [Mer93] to


    implement edge detection and edge tracking, while Andraka [And96] devised a dynamic hardware video processing platform. Reconfigurable logic [Woo98] using the XC6200 from Xilinx is another methodology for implementing real-time processing. Very Large Scale Integration (VLSI) implementations as Application Specific Integrated Circuits (ASICs) for the manipulation of digital colour images can be found in [And95] for the IHS colour space and in [And95a] for CIEL*a*b*.

    Integrated circuit manufacturers have also been producing dedicated circuitry for colour image processing applications. Altera has an RGB to YCrCb and YCrCb to RGB colour space converter operating in real-time [Alt97]. Edge detection [Atm00] and 3x3 convolution [Atm00a] are also possible using Atmel AT6000 FPGAs. Crystal Semiconductors also created a digital colour-space processor [Cry97] operating in the YCrCb colour space for CCD cameras.
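    A converter of this kind typically hard-wires a fixed matrix; the ITU-R BT.601-style mapping below is a sketch of the usual arithmetic (an assumption for illustration; the exact coefficients and offsets used in [Alt97] or [Cry97] may differ):

```python
def rgb_to_ycrcb(r, g, b):
    """BT.601-style RGB to YCrCb: a weighted-sum luma Y plus two
    colour-difference channels offset to mid-range (128 for 8 bits)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.713 * (r - y) + 128
    cb = 0.564 * (b - y) + 128
    return y, cr, cb
```

    A neutral grey (r = g = b) maps to Cr = Cb = 128, i.e. zero chroma, which is what makes the representation attractive for chroma subsampling in TV systems.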

    1.6 Organisation of this thesis

    Chapter 2 first looks at the equations needed to make transformations from RGB to alternative colour spaces. Most image acquisition devices are based on the RGB colour space; therefore this chapter reviews the algorithms, equations and procedures reported in the literature to obtain the conversions. Next, after analyzing these algorithms, a generalized procedure will be proposed which is suitable to cope with all variants of the transformation procedures. This will be done bearing in mind that it should also be suitable for hardware implementation. Three stages that cope with every possible transformation scenario will be identified and explained. Moreover, the last section of the chapter determines all the ranges of possible values used by the generalised conversion procedure. This is of vital importance for the hardware implementation, in order to consider the resolution and size of all units that will carry out the arithmetic operations and to determine the word and bus sizes.

    Chapter 3 will explain in detail how the generalized transformation methodology described in Chapter 2 is realized in hardware. A Universal Colour Transformation Hardware (UCTH) system based on Field Programmable Gate Arrays (FPGAs) and Look-up Tables (LUTs), capable of performing real-time colour transformations, is outlined and implemented here. The LUTs will contain the final values of the resulting transformed colour space, whilst the FPGAs will be


    reducing the size of and generating the address buses for each and every look-up table. To further simplify the equations, a cascaded LUT configuration is also proposed and implemented. This chapter will also cover how floating-point numbers are represented using fixed-point values on which mathematical operations can be performed while minimizing quantization errors. The following sections of this chapter are devoted to implementing each of the three stages mentioned in Chapter 2: a Matrix Multiplier (MM), a Functional Mapping Unit (FMU) and Switching Interconnect Logic (SIL). At the end of Chapter 3, the devices used for the implementation are described.

    Chapter 4 will be devoted to fully testing the operation and functionality of the UCTH. Two tests will be performed: a static test with fixed images and a dynamic test with video images. During the static tests, the results generated by modeling the transformations in software, by simulating the circuitry using the design entry tools, and by hardware simulations will be verified and conclusions will be drawn. A formal verification procedure is established to demonstrate the functionality and precision of the architecture as a whole. With the dynamic testing of the circuitry, conversion speeds will be obtained to prove the UCTH's capability to handle real-time transformation speeds. A series of support circuits and units are designed to interface the UCTH with input and output devices. These include a Colour Conditioning Unit (CCU), a Digitizing Unit (DU), a Filtering Unit (FU) for noise removal and, finally, circuitry for generating video signals that can be displayed on a CRT monitor. Furthermore, this chapter will incorporate into the UCTH a section named the Colour Image Processing Unit (CIPU) that will be able to carry out some image processing algorithms, demonstrating the flexibility of the UCTH.

    Chapter 5 will implement two different colour image-processing algorithms using the CIPU. First, an image segmentation algorithm based on colour clustering will be realised, based on the CIEL*a*b* colour space. Regions of pixels will be grouped together according to their colour properties and delimited by planes creating a cluster volume. Second, a colour-defective vision graphics display will be put into operation, also showing the versatility of the UCTH and the CIPU module. Basically, it permits people with no colour deficiencies to see through the eyes of people with some kind of colour blindness. The objectives and uses of these realisations will also be explained.

    Chapter 6 deals with the design and implementation in hardware of a 2-D median filter for the

    removal of impulse noise from images. By using a novel rank adjustment technique that


    prevents implicit sorting and moving values around, a highly repetitive structure is obtained which is suitable for hardware implementation. A clear procedure of how the technique works is given by means of an illustrative example. The filter is capable of operating in real-time, giving a median output value every clock cycle regardless of the size of the input mask. The front-end of the filter converts a linear stream of pixels originating from a CCD camera into matrix form with the aid of First-In First-Out (FIFO) memories. At the back-end of the filter, a field memory holds the filtered channel. This chapter also reviews filters better suited to higher-dimensional spaces, i.e. vector directional filters (VDF) and vector median filters (VMF).
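    For reference, the effect of a 2-D median filter can be sketched with a straightforward sorting-based software version (this is a generic illustration; the hardware design in Chapter 6 deliberately avoids the explicit sort via its rank adjustment technique):

```python
def median_filter_2d(img, k=3):
    """Plain 2-D median filter over a k x k mask on a list-of-lists
    greyscale image; border pixels are left unfiltered for simplicity."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [img[y + dy][x + dx]
                      for dy in range(-r, r + 1)
                      for dx in range(-r, r + 1)]
            out[y][x] = sorted(window)[len(window) // 2]
    return out
```

    An isolated impulse (a single outlier pixel) in an otherwise uniform region is replaced by the surrounding value, which is precisely why the median is effective against impulse noise.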

    Chapter 7 contains the conclusions and suggestions for future work. Comments and conclusions will be given for the results obtained in every chapter. Finally, lines of research that can follow on from this work will be highlighted.