
From Single Image to List of Objects

Based on Edge and Blob Detection

Rafał Grycuk1, Marcin Gabryel1, Marcin Korytkowski1, Rafał Scherer1, and Sviatoslav Voloshynovskiy2

1 Institute of Computational Intelligence, Częstochowa University of Technology,
Al. Armii Krajowej 36, 42-200 Częstochowa, Poland
{rafal.grycuk,marcin.gabryel,marcin.korytkowski,rafal.scherer}@iisi.pcz.pl
http://iisi.pcz.pl

2 University of Geneva, Computer Science Department,
7 Route de Drize, Geneva, Switzerland
http://sip.unige.ch

Abstract. In this paper we present a new method for obtaining a list of objects of interest from a single image. Our object extraction method builds on two well-known algorithms: Canny edge detection and quadrilateral detection. The approach selects only the significant elements of the image. In addition, it filters out, in a simple way, unnecessary key points (for example those obtained by the SIFT algorithm) belonging to the image background. The effectiveness of the method is confirmed by experimental research.

1 Introduction

Content-based image retrieval is one of the greatest challenges of present computer science. Effective browsing and retrieval of images is required in many fields of life, e.g. medicine, architecture, forensics, publishing, fashion, archives and many others. In the process of image recognition, users search through databases consisting of thousands, even millions of images. The aim can be the retrieval of a similar image or of images containing certain objects [4][6]. Retrieval mechanisms use image recognition methods. This is a sophisticated process which requires the use of algorithms from many different areas such as computational intelligence [7][8][12][20][21][22][24][27][29][32], machine learning [23], mathematics [2][13] and image processing [11]. An interesting approach to CBIR, based on nonparametric estimates (see e.g. [14][25][26]), was presented in [10].

One of the important aspects of content-based image retrieval is the ability to find specific objects in the image. This is particularly important in the case of large image databases which are searched for specific objects. Many ways to detect objects can be found in the literature, and one of them is image segmentation. The goal of image segmentation is to cluster pixels into salient image regions, i.e. regions corresponding to individual surfaces, objects, or natural parts of objects.


There are many methods for image segmentation: thresholding [3], clustering methods [1], histogram-based methods [30], edge detection [28], stereo vision based methods [16] and many others. We propose a new method for extracting objects from images. This method consists of two parts: the Canny edge detection method [5] and a method for quadrilateral detection [17]. An important element of our algorithm is the additional step of filtering out the key points belonging to the background. The result is an image with key points only in the most interesting areas of the image.

Our method uses the SIFT (Scale-Invariant Feature Transform) algorithm to detect and describe local features of an image. It was presented for the first time in [18] and is now patented by the University of British Columbia. For each key point describing a local image feature, we generate a feature vector that can be used for further processing. SIFT consists of four main steps [15][19]: extraction of potential key points, selection of stable key points (resistant to changes of scale and rotation), finding key point orientations immune to image transformations, and generating a vector describing the key point. Each SIFT key point descriptor contains 128 values.
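The output of this step is simply a set of (x, y) key point positions, each with a 128-element descriptor. As an illustration only (the authors' implementation is in .NET, see Section 4), the same kind of data can be obtained with OpenCV's SIFT in Python; the file name and the OpenCV version (4.4 or newer, where SIFT is part of the main module) are assumptions:

    import cv2

    # Illustrative sketch only; not the authors' .NET implementation.
    gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    print(len(keypoints))      # number of detected key points
    print(descriptors.shape)   # (number of key points, 128)
    print(keypoints[0].pt)     # (x, y) position, used later to assign points to blobs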

The paper consists of several sections. The next sections present the methods used in the proposed approach and the proposed image segmentation algorithm. The last section presents experimental results obtained with original software written in .NET.

2 Canny Edge Detection

The Canny edge detector [5] is one of the most commonly used image processing methods for detecting edges. It takes a gray-scale image as input and produces as output an image showing the positions of tracked intensity discontinuities. The algorithm runs in five separate steps:

1. Noise reduction. The image is smoothed by applying an appropriate Gaussian filter.

2. Finding the intensity gradient of the image. During this step the edges should be marked where the gradients of the image have large magnitudes.

3. Non-maxima suppression. If the gradient magnitude at a pixel is larger than those at its two neighbors in the gradient direction, mark the pixel as an edge. Otherwise, mark the pixel as the background.

4. Double thresholding. Potential edge pixels are classified as strong or weak by means of two thresholds.

5. Edge tracking by hysteresis. Final edges are determined by suppressing all edges that are not connected to genuine (strong) edges.

The effect of the Canny operator is determined by two parameters:

– The width of the Gaussian filter used in the first stage directly affects the results of the Canny algorithm,

– The thresholds used during edge tracking by hysteresis. It is difficult to give a generic threshold that works well on all images.
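To make the role of these two parameters concrete, the following sketch (Python with OpenCV, used here only as an illustration; the kernel size, sigma and threshold values are arbitrary assumptions) smooths the image with a Gaussian filter and then applies Canny with a pair of hysteresis thresholds:

    import cv2

    gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)        # step 1: Gaussian filter (5x5 kernel, sigma 1.4)
    edges = cv2.Canny(smoothed, 50, 150)                  # steps 2-5 with hysteresis thresholds 50 and 150
    cv2.imwrite("edges.jpg", edges)                       # binary image of tracked intensity discontinuities

Wider Gaussian kernels remove more noise but also blur weak edges away, while the two thresholds control how aggressively weak edge segments are kept.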


3 Blob Extraction

Blob extraction is one of the basic methods of image processing. It allows detecting and extracting a list of blobs (objects) in the image. Unfortunately, obtaining homogeneous objects from an image as a list of pixels is extremely complicated, especially when we deal with a heterogeneous background, i.e. objects lying on a multicolored background. There are many methods for extracting objects (blobs) from images [9]. In this paper we use methods implemented in the AForge.NET library. These algorithms are described by Andrew Kirillov [17]. There are four types of algorithms: Convex hull, Left/Right Edges, Top/Bottom Edges and Quadrilateral. Fig. 1 compares these blob detection methods. Fig. 1A illustrates the Quadrilateral detection method. As can be seen, round edges of the objects are not detected correctly. Much better results are obtained by the Top/Bottom Edges algorithm (Fig. 1C). Edges of objects are detected mostly correctly, with individual exceptions. The Left/Right Edges method behaves similarly (Fig. 1B). The last method has a problem with the detection of vertices inside figures, e.g. star-type objects (Fig. 1D). The Quadrilateral detection method can be described by the following steps:

1. Locate each separate object in the input image,

2. Find object edge pixels (methods: Top/Bottom, Left/Right),

3. Detect four corners of the quadrilateral,

Fig. 1. Comparison of methods for blob detection used in the AForge.NET library [17]


4. Set distortionLimit. This value determines how much the shape may deviate from an ideal quadrilateral; set 0 to detect only perfect quadrilaterals,

5. Check how well the analyzed shape fits into the quadrilateral with the assumed parameter (see Fig. 1D),

6. Compute the mean distance between the detected edge pixels and the edges of the assumed quadrilateral,

7. If the mean distance is not greater than distortionLimit, then we can assume that the shape is a quadrilateral.
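The steps above describe the AForge.NET quadrilateral check [17]. A loose analogue (not the library's exact algorithm) can be sketched with OpenCV contours, where the polygon-approximation tolerance plays a role roughly similar to distortionLimit; the tolerance value below is an assumption:

    import cv2

    edges = cv2.imread("edges.jpg", cv2.IMREAD_GRAYSCALE)   # binary edge image from the previous step
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:                                  # step 1: each separate object
        tolerance = 0.02 * cv2.arcLength(contour, True)       # rough analogue of distortionLimit
        corners = cv2.approxPolyDP(contour, tolerance, True)  # steps 2-3: reduce edge pixels to corners
        if len(corners) == 4:                                 # steps 4-7: accept shapes that reduce to four corners
            x, y, w, h = cv2.boundingRect(corners)
            print("quadrilateral-like blob:", (x, y, w, h))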

4 Proposed Segmentation Algorithm

In this section we present a new approach to segmentation. Given a single image described by a pixel matrix (e.g. RGB), we can segment it into many objects. This section describes the methodology of this process. The algorithm can be divided into several steps. First of all, we need to create a mathematical description of the input image. In this step we use the SIFT algorithm and obtain in return a list of key points for the whole image. The next step is to create an edge-detected image; here we use the Canny algorithm described in Section 2. Another step detects blobs, with all their properties such as: center of gravity, area, mean color, blob rectangle, position. When we have detected these object contours as blobs, we can extract rectangles for these objects. The next step detects and saves the key points in each rectangle. In other words, we check whether the key point position lies in the rectangle area; if so, it is assigned to this rectangle. The last step extracts the objects from the original (input) image and saves them as separate objects. This method allows extracting a list of objects from the image, not only as raw parts of the image but also with a list of key points assigned to each object. In other words, at the output of our method we obtain a list of objects (raw image parts) with key points (vectors) assigned to each object [31]. Figure 2 describes the steps of the algorithm, which can be represented by the following pseudo code. Note that the blob detection step generates lots of small blobs.

Fig. 2. Block diagram of the proposed method


INPUT: Single image, singleInputImage
OUTPUT: Segmented image, list of objects with descriptors assigned to each object: descriptorsList, objectsList

allDescriptors := Generate Descriptor(singleInputImage);
edgedImage := Canny Edge Detector(singleInputImage, thresh, threshLinking);
blobs := Detect Blobs(edgedImage);
foreach blob in blobs do
    if blob.Area >= minBlobArea then
        extractedBlob := Extract Blob(singleInputImage, startPosition, blob.Width, blob.Height);
        objectsList.Add To List(extractedBlob);
        extractedDescriptor := Extract Descriptor(allDescriptors, startPosition, blob.Width, blob.Height);
        descriptorsList.Add To List(extractedDescriptor);
    end
end

Algorithm 1. Segmentation steps

The minBlobArea parameter describes the minimum area of extracted objects; all blobs with an area equal to or greater than this value will be extracted. For the blob detection process we use the Quadrilateral detection method (see Section 3). The extraction process (blob and descriptor extraction) is realized simply by copying the entire blob rectangle into a newly created image (pixel by pixel, key point by key point).

The algorithm stages were implemented in .NET C# using the AForge and Emgu CV libraries. The first and second steps are based on Emgu CV, the third was implemented using AForge, and the last two steps were implemented by the authors.
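For readers without a .NET environment, the same flow can be sketched in Python with OpenCV. This is only an illustration of Algorithm 1 under assumed parameter values, with contour bounding rectangles standing in for AForge blobs; it is not a port of the authors' code:

    import cv2

    MIN_BLOB_AREA = 500                                      # analogue of minBlobArea (value assumed)

    image = cv2.imread("single_input_image.jpg")             # hypothetical input file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()                                 # step 1: descriptors for the whole image
    keypoints, descriptors = sift.detectAndCompute(gray, None)

    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 1.4), 50, 150)   # step 2: edge image

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # step 3: blobs

    objects_list, descriptors_list = [], []
    for contour in contours:
        if cv2.contourArea(contour) < MIN_BLOB_AREA:          # skip small blobs
            continue
        x, y, w, h = cv2.boundingRect(contour)
        objects_list.append(image[y:y + h, x:x + w].copy())   # step 4: copy the blob rectangle
        inside = [i for i, kp in enumerate(keypoints)          # step 5: key points lying inside the rectangle
                  if x <= kp.pt[0] < x + w and y <= kp.pt[1] < y + h]
        descriptors_list.append(descriptors[inside])

    print(len(objects_list), "objects extracted")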

5 Experimental Results

In this section we present the results of the experiments. For this purpose we created a simulation application based on the described algorithms. A single image is given as input and in return we obtain a list of objects with assigned key points. Images in the simulations were labelled as follows:

– Input images were labelled by characters (i.e. A, B, C, ...),
– Extracted objects were labelled by object number and the letter of the input image (i.e. 1A means the first object from image A).

Figure 3 presents the input images of four simulations. Three of them (A, C, D) are from the same image class (cars), but they were taken from different perspectives. In Fig. 4 we present the results of the proposed method. For each input image (A, B, C, D) we obtain a list of objects (1A, 2A, ..., 1B, 2B, ...). As can be seen, the algorithm segmented the images properly. Figure 5 illustrates other types of images, i.e. astronomical images, underwater images and images with many different objects. These images were given as the algorithm input and were segmented correctly. Object 5G in Fig. 6 was not entirely extracted.


The rest of the objects (image G) were separated properly. Table 1 and Table 2 present the segmented key points. The first column gives the object id (e.g. 1A), the second gives the key point count for each object, and the third gives the key point count of the input image (before segmentation).

Figure 7A presents the input image after key point detection. As can be seen, the SIFT algorithm detects multiple key points in the background of the image; they are located both on objects and on the background. The next step is Canny edge detection. This part of the algorithm is extremely important: performing blob detection directly on the input image is pointless because the algorithm detects only one blob (the entire image). To eliminate this drawback we use Canny edge detection as preprocessing, and on the resulting edge image we execute the blob detection algorithm. After that, we extract the blobs simply by copying the entire rectangle represented by each blob. As can be seen in Fig. 7B, the insignificant key points have been removed. We obtain the extracted objects saved as separate images.

Fig. 3. Simulation input images. Experiments A-D.

6 Final Remarks

The presented method is a novel approach to image segmentation. The conducted experiments proved the effectiveness of our method. The algorithm takes a single RGB image as input and returns a list of objects as output. Most of the extracted objects are detected correctly, but some images contain fragments of other objects; this is caused by the intersection of these objects. The next step of our research will be color-based distinction of the obtained objects in order to remove non-homogeneous objects. Our method extracts objects from different types of images, such as astronomical images, underwater images and many others.


Fig. 4. Objects obtained from images presented in Fig. 3. Experiments A-D. Object labels correspond with the input image (e.g. 1B, 2B, ... are objects belonging to input image B).

Table 1. Key points extraction, experiments A-D. Key points are assigned to each object. The labels correspond with Fig. 4.

Blob number | Object key points count | Input image key points count
1A  | 138 | 795
2A  |  65 |
3A  | 248 |
4A  | 160 |
5A  | 175 |
1B  |  10 |  47
2B  |   5 |
3B  |  16 |
1C  | 137 | 1143
2C  | 107 |
3C  | 172 |
4C  | 201 |
5C  |  39 |
6C  | 189 |
7C  |  71 |
8C  |  48 |
9C  |  82 |
1D  |  43 | 247
2D  |  41 |
3D  |  60 |
4D  |  77 |

Table 2. Key points extraction, experiments E-H. Key points are assigned to each object. The labels correspond with Fig. 6.

Blob number | Object key points count | Input image key points count
1E  | 241 | 868
2E  | 326 |
3E  | 281 |
1F  |  63 | 141
2F  |  53 |
3F  |  18 |
1G  | 239 | 835
2G  |  67 |
3G  |  87 |
4G  |  45 |
5G  |  79 |
6G  |  11 |
7G  |  15 |
8G  |  75 |
9G  |  70 |
10G |  33 |
11G |  97 |
1H  | 264 | 849
2H  | 301 |


Fig. 5. Simulation input images. Experiments E-H.

Fig. 6. Objects obtained from images presented in Fig. 5. Experiments E-H. Object labels correspond with the input image (e.g. 1G, 2G, ... are objects belonging to input image G).


Fig. 7. Background key points removal simulation. Fig. 7A presents the image after key point detection and Fig. 7B after background key points removal.

Acknowledgments. The work presented in this paper was supported by a grant from Switzerland through the Swiss Contribution to the enlarged European Union.

References

[1] Barghout, L., Sheynin, J.: Real-world scene perception and perceptual organization: Lessons from computer vision. Journal of Vision 13(9), 709 (2013)

[2] Bartczuk, L., Przybyl, A., Dziwinski, P.: Hybrid state variables - fuzzy logic modelling of nonlinear objects. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2013, Part I. LNCS, vol. 7894, pp. 227–234. Springer, Heidelberg (2013)

[3] Batenburg, K., Sijbers, J.: Optimal threshold selection for tomogram segmentation by projection distance minimization. IEEE Transactions on Medical Imaging 28(5), 676–686 (2009)

[4] Bazarganigilani, M.: Optimized image feature selection using pairwise classifiers. Journal of Artificial Intelligence and Soft Computing Research 1(2), 147–153 (2011)

[5] Canny, J.: A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8(6), 679–698 (1986)

[6] Chang, Y., Wang, Y., Chen, C., Ricanek, K.: Improved image-based automatic gender classification by feature selection. Journal of Artificial Intelligence and Soft Computing Research 1(3), 241–253 (2011)

[7] Cpalka, K.: A new method for design and reduction of neuro-fuzzy classification systems. IEEE Transactions on Neural Networks 20(4), 701–714 (2009)

[8] Cpalka, K., Rutkowski, L.: Flexible Takagi-Sugeno neuro-fuzzy structures for nonlinear approximation. WSEAS Transactions on Systems 4(9), 1450–1458 (2005)

[9] Damiand, G., Resch, P.: Split-and-merge algorithms defined on topological maps for 3D image segmentation. Graphical Models 65(1), 149–167 (2003)


[10] Duda, P., Jaworski, M., Pietruczuk, L., Scherer, R., Korytkowski, M., Gabryel, M.: On the application of Fourier series density estimation for image classification based on feature description. In: Proceedings of the 8th International Conference on Knowledge, Information and Creativity Support Systems, Krakow, Poland, November 7-9, pp. 81–91 (2013)

[11] Gabryel, M., Korytkowski, M., Scherer, R., Rutkowski, L.: Object detection by simple fuzzy classifiers generated by boosting. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2013, Part I. LNCS, vol. 7894, pp. 540–547. Springer, Heidelberg (2013)

[12] Gabryel, M., Nowicki, R.K., Wozniak, M., Kempa, W.M.: Genetic cost optimization of the GI/M/1/N finite-buffer queue with a single vacation policy. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2013, Part II. LNCS, vol. 7895, pp. 12–23. Springer, Heidelberg (2013)

[13] Gabryel, M., Wozniak, M., Nowicki, R.K.: Creating learning sets for control systems using an evolutionary method. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) SIDE 2012 and EC 2012. LNCS, vol. 7269, pp. 206–213. Springer, Heidelberg (2012)

[14] Greblicki, W., Rutkowska, D., Rutkowski, L.: An orthogonal series estimate of time-varying regression. Annals of the Institute of Statistical Mathematics 35(1), 215–228 (1983)

[15] Grycuk, R., Gabryel, M., Korytkowski, M., Scherer, R.: Content-based image indexing by data clustering and inverse document frequency. In: Kozielski, S., Mrozek, D., Kasprowski, P., Małysiak-Mrozek, B., Kostrzewa, D. (eds.) BDAS 2014. CCIS, vol. 424, pp. 374–383. Springer, Heidelberg (2014)

[16] Grycuk, R., Gabryel, M., Korytkowski, M., Scherer, R., Romanowski, J.: Improved digital image segmentation based on stereo vision and mean shift algorithm. In: Parallel Processing and Applied Mathematics 2013. LNCS. Springer, Heidelberg (2014) (manuscript accepted for publication)

[17] Kirillov, A.: Detecting some simple shapes in images (2010), http://www.aforgenet.com

[18] Lowe, D.G.: Object recognition from local scale-invariant features. In: The Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 1150–1157. IEEE (1999)

[19] Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)

[20] Nowicki, R.: Rough-neuro-fuzzy system with MICOG defuzzification. In: 2006 IEEE International Conference on Fuzzy Systems, pp. 1958–1965 (2006)

[21] Nowicki, R.: On classification with missing data using rough-neuro-fuzzy systems. International Journal of Applied Mathematics and Computer Science 20(1), 55–67 (2010)

[22] Nowicki, R., Rutkowski, L.: Soft techniques for Bayesian classification. In: Neural Networks and Soft Computing, pp. 537–544. Springer (2003)

[23] Peteiro-Barral, D., Guijarro-Bardinas, B., Perez-Sanchez, B.: Learning from heterogeneously distributed data sets using artificial neural networks and genetic algorithms. Journal of Artificial Intelligence and Soft Computing Research 2(1), 5–20 (2012)

[24] Przybył, A., Cpałka, K.: A new method to construct of interpretable models of dynamic systems. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2012, Part II. LNCS, vol. 7268, pp. 697–705. Springer, Heidelberg (2012)


[25] Rutkowski, L.: A general approach for nonparametric fitting of functions and their derivatives with applications to linear circuits identification. IEEE Transactions on Circuits and Systems 33(8), 812–818 (1986)

[26] Rutkowski, L.: Non-parametric learning algorithms in time-varying environments. Signal Processing 18(2), 129–137 (1989)

[27] Rutkowski, L., Przybył, A., Cpałka, K., Er, M.J.: Online speed profile generation for industrial machine tool based on neuro-fuzzy approach. In: Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2010, Part II. LNCS, vol. 6114, pp. 645–650. Springer, Heidelberg (2010)

[28] Sankaranarayanan, V., Lakshmi, S.: A study of edge detection techniques for segmentation computing approaches. IJCA, Special Issue on CASCT (1), 35–41 (2010)

[29] Starczewski, J.T.: A type-1 approximation of interval type-2 FLS. In: Di Gesu, V., Pal, S.K., Petrosino, A. (eds.) WILF 2009. LNCS, vol. 5571, pp. 287–294. Springer, Heidelberg (2009)

[30] Stockman, G., Shapiro, L.G.: Computer Vision, 1st edn. Prentice Hall PTR, Upper Saddle River (2001)

[31] Tamaki, T., Yamamura, T., Ohnishi, N.: Image segmentation and object extraction based on geometric features of regions. In: Electronic Imaging 1999, pp. 937–945. International Society for Optics and Photonics (1998)

[32] Zalasinski, M., Cpałka, K.: Novel algorithm for the on-line signature verification. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2012, Part II. LNCS, vol. 7268, pp. 362–367. Springer, Heidelberg (2012)