Estimation of Phylogenetic Relationships of Japanese Native Fowls using 3-dimensional Image Recognition
3 次元画像認識を用いた日本在来ニワトリの系統関係の推定
by Yasuhiko Uehara
上原 康彦
A Master Thesis
修士論文
Submitted to
The Department of Computer Science The Graduate School of Information Science and Technology
The University of Tokyo February 2004
In partial fulfillment of the requirements For the Degree of Master of Information Science and Technology
Thesis Supervisor: Katsushi Ikeuchi 池内 克史
Abstract

In biology, there are two approaches for estimating phylogenetic relationships between related breeds of fowls: gene analysis and morphological analysis. Gene analysis involves (1) analyzing the relationships directly using ancient documents and similar sources, and (2) analyzing fowls' genes using their blood, e.g., DNA. In morphological analysis, only the lengths of specific parts of skeletal specimens are measured, and Principal Component Analysis is then used to estimate the phylogenetic relationships.

Because such methods make it difficult to distinguish individual differences, such as the size of each individual, from interspecies morphological differences, we consider analyzing the specimens in detail using techniques of 3-dimensional image recognition, a field in which much progress has recently been made.

In this paper, we describe how we measured skulls of Japanese native fowls with a laser scanner and generated precise 3-dimensional geometric models of their skulls; we then propose a new method to estimate phylogenetic relationships of Japanese native fowls by analyzing cross-sectional images extracted from the models and by analyzing the Spherical Attribute Image (SAI) generated from the 3-dimensional geometric models. In addition, we propose a method for rapid generation of SAIs in which the distances between any two neighboring nodes are equal.
論文要旨 (Thesis Abstract in Japanese; English translation)

In biology, two approaches are used to estimate phylogenetic relationships between animal species: a genetic approach that analyzes genes and a morphological approach that analyzes skeletal shape. In the morphological approach, only the lengths of several parts of an animal's skeletal specimen are measured, and the phylogenetic relationships are estimated by multivariate analysis.

With such a method, it is difficult to clearly distinguish local intraspecies differences, such as the size of each individual (individual differences), from global interspecies differences. We therefore consider realizing a more detailed analysis using the techniques of 3-dimensional image recognition, which have developed in recent years.

In this thesis, we measure the 3-dimensional shapes of skull specimens of Japanese native fowls with a laser scanner and generate 3-dimensional geometric models. We then propose a method for estimating the interbreed phylogenetic relationships of the fowls by 2-dimensional analysis of cross-sectional curves extracted from the models and by analysis of the SAI (Spherical Attribute Image) generated from the 3-dimensional geometric shapes. We also propose a fast method for constructing an SAI with uniform edge lengths.
Acknowledgements
It has been my great honor and pleasure to receive suggestions and advice on
this topic from H.I.H. Prince Akishino, whose inspiring and valuable guidance
enabled me to obtain the results described in this thesis. I gratefully
thank him for his suggestions and advice.
I extend my sincere appreciation to my thesis supervisor, Professor Katsushi
Ikeuchi, who introduced me to the various computer vision techniques essential
for designing my analysis methods.
I would also like to thank Vice President and Professor Yoshihiro Hayashi of the
University of Tokyo, Professor Takashi Amano and Ms. Yasuko Ino of Tokyo
University of Agriculture for their suggestions and advice about phylogenetic re-
lationships of Japanese native fowls, especially from a biological viewpoint. This
viewpoint plays a very important role in demonstrating the reliability of my
methods.
In addition, I would like to thank members of the Computer Vision Laboratory,
Institute of Industrial Science, the University of Tokyo; in particular, I would like
to thank Drs. Atsushi Nakazawa and Jun Takamatsu, for their advice, help, and
encouragement.
Last, but not least, I would like to thank my family for their support during my
time as a student.
Contents

Chapter 1 Introduction……………………………………………………………………………....1
Chapter 2 Generation of 3-Dimensional Geometric Models………………………………………4
2.1 Measurement………………………………………………………………………………………4
2.2 Alignment……………………………………………………………………………………….....6
2.3 Merging………………………………………………………………………………………..…..7
Chapter 3 2-Dimensional Analysis of Cross-Sectional Image ……....………...…………………..8
3.1 2-dimensional methods for analyzing 3-dimensional models……….……………………………8
3.1.1 Finding unpaired cross-sections by PCA……………………………………………..8
3.1.2 Extracting a cross-sectional image from a 3-dimensional model……….……..…..11
3.1.3 Approximating the curves by NURBS…………..………………………………….12
3.1.4 Extracting vector data from the curve…………..…………………………………..17
3.1.5 Principal Component Analysis………………………………………………………18
3.1.6 Hierarchical Cluster Analysis……………………………………………………….19
3.2 Experiment……………………………………………………………………………………….22
3.2.1 Height from Horizontal Line………………………………………………………..22
3.2.2 Curvature……………………………………………………………………………25
Chapter 4 3-Dimensional Analysis…………………………………………………….…………..31
4.1 The concept of the spherical attribute image (SAI)……………………………………………...31
4.2 Related work……………………………………………………………………………………..36
4.3 The SAI Algorithm……………….……………………………………………………………...36
4.3.1 Simplex Angle……………………………………………………………………….36
4.3.2 Geodesic Dome: generating the initial mesh………………………………………..38
4.3.3 Deformable Surface…………………………………………………………………40
4.3.4 From Shape to Attribute: Forward Mapping………………………………………..44
4.3.5 From Attribute to Shape: Inverse Mapping…………………………………………45
4.4 Matching of two SAI images…………..………………………………………………………...46
4.5 Shape interpolation by SAI images……………………………………………………………...47
4.6 Problems of the conventional SAI method………………………………………………………48
4.6.1 Asymmetry of the lines in the mesh…………………………………………………48
4.6.2 Computational Costs………………………………………………………………...48
4.7 Proposed Methods………………………………………………………………………………..49
4.7.1 Spring Force…………………………………………………………………………49
4.7.2 Fast Search for Closest Point………………………………………………………..50
4.8 Experiment……………………………………………………………………………………….51
4.8.1 Generation of SAI images from the models of the fowls’ skulls………...………….51
4.8.2 Comparison of SAI images………………………………………………………….54
Chapter 5 Conclusion and Future Work.…………………………………………………………57
References…………………………………………………………………………………………...58
Chapter 1
Introduction
Japan is home to unique native fowls called Nihon-Kei. Among
these, 17 breeds, such as the Syamo (see Fig. 1.1; Syamo means a fighting fowl
in Japanese) and the Gifu-Jidori (Jidori means a domestic fowl in Japanese), have
been designated protected breeds in Japan. These fowls originated in
Southeast Asia and China and were imported to our country during the Yayoi
period, several centuries B.C. The breeds were subsequently
improved for various purposes, including satisfying the Japanese fondness
for meat and cockfighting. As a result, many characteristic breeds were produced.
Genetically, these Japanese native fowls are important breeds. Recognition of their
importance led some amateur fanciers to preserve these fowls, and later the necessity
of their preservation by public means was also recognized. Therefore, various
investigations and research on their origin and on their phylogenetic relationships
have been performed.
Figure 1.1: A Syamo
Recently, analyses of relationships by blood line based on ancient documents,
as well as analyses of forms, blood types, and DNA, have been performed by Oana
et al. [18], Hayashi et al. [20], Nishida et al. [21], Okada et al. [22], Hashiguchi
et al. [19], Tanabe et al. [24], and Takahashi [23]. The research can be classified
into three categories: (1) analyzing blood relationships directly from
ancient documents and similar sources, (2) analyzing fowls' genes using their blood, e.g., DNA,
and (3) analyzing fowls' shapes using morphological analysis methods.
Among morphological analysis methods, i.e., methods that analyze interspecies differences
in body form, osteometric methods are common. These methods analyze
differences in shape by choosing characteristic parts of skeletal specimens.
Concretely speaking, they measure the distances between specific points precisely
and compare the corresponding distances across breeds by multivariate
analysis such as principal component analysis (PCA).
With these methods, only the distances between specific points of the specimens can be
obtained; it is difficult to analyze the overall form of the specimens and to distinguish
individual differences, such as the size of each individual, from the interspecies
morphological differences. In this research, using computer vision techniques,
we measured skulls of Japanese native fowls with a laser scanner, then
generated precise 3-dimensional geometric models of those skulls, and finally
quantitatively analyzed their shapes using a computer.
The goal of this research is to estimate phylogenetic relationships among Japanese
native fowls. There are two approaches to analyzing their 3-dimensional geometric models:
(1) 2-dimensional analysis of a specific cross-sectional image extracted from
the models
(2) direct 3-dimensional analysis of the 3-dimensional models
In this thesis, using these two approaches, we analyzed the shapes of the skeletal
specimens of the fowls.
The thesis is organized as follows: In Chapter 2, we describe how we measured
the real objects with a laser scanner and generated their precise 3-dimensional
geometric models.
In Chapter 3, we describe how we found the specific cross-section by PCA,
approximated the 2-dimensional shape of the cross-sections by the non-uniform rational
B-spline (NURBS), one of the parametric curves, and analyzed the
curves using PCA and hierarchical cluster analysis (HCA). We then describe
the experiments in which we analyzed the models of the fowls, and discuss
the results.
In Chapter 4, we first describe the concept and algorithm of the spherical attribute
image (SAI), one of the methods for mapping geometric attributes of
3-dimensional shapes onto spheres. Next, we describe the technique of generating
imaginary 3-dimensional shapes by interpolating between SAIs calculated
from different 3-dimensional models. We then discuss the problems posed by
the SAI method, propose a new method for solving those problems, describe
the experiments in which we analyzed the models of the fowls by
the SAI method, and discuss the results. Finally, in Chapter 5, we conclude this
thesis.
Chapter 2
Generation of 3-Dimensional Geometric Models

In this chapter, we describe how we measured real objects, namely the skeletal specimens of fowls' skulls, with a laser scanner and generated their precise 3-dimensional geometric models. First, we describe the technique of obtaining a picture, called a range image, in which depth information is mapped onto each pixel. Next, we describe the method of aligning range images taken from different positions into a single coordinate system. Finally, we describe the method for generating a single 3-dimensional model by merging the aligned range images.

2.1 Measurement

A laser scanner is a device for obtaining a range image by irradiating a real object with a laser beam. The measurement gives the accurate distance from the viewpoint to the surface of the real object; these distances are mapped onto the pixels of the range image as depth information. In this research, we used a laser scanner called the VIVID910 (see Fig. 2.1), a product of KONICA MINOLTA, Inc.
Figure 2.1: VIVID910
Since the position of the laser scanner is fixed during a measurement, the shape of the object's back and of any portion in shadow cannot be acquired in one measurement. Thus, it is necessary either to rotate the object or to change the scanner's position, measuring from sufficiently many positions to cover the entire object surface. In this research, we measured the skull specimens of fowls. The samples included the following breeds of Japanese native fowl: five Gifu-Jidoris, five Satsuma-Doris (Dori or Tori means a fowl in Japanese), five Syamos, and four Ko-Syamos (Ko- means small in Japanese). Also included were five European White Leghorns; this is the most standard and popular egg breed, and we used it as a benchmark in the analysis of the phylogenetic relations. All in all, we measured the skulls of 24 individual fowls belonging to 5 different breeds. Considering the complexity of the shape of each individual skull (Figure 2.2 shows one of the specimens), measurements from 11 different viewpoints are enough to model a fowl. Figure 2.3 shows examples of the measurements: range images of the skull of a Gifu-Jidori, one of the Japanese native fowls, obtained by the VIVID910 scanner. They are all of the same individual, each measured from a different viewpoint.
Figure 2.2: A skeletal specimen of a fowl’s skull
Figure 2.3: Range images of a Gifu-Jidori
2.2 Alignment

The range images described in the previous section cannot be merged directly into a single 3-dimensional geometric model because each is placed in a camera coordinate system that depends on the position of the laser scanner at the time of measurement. The alignment of such range images into a single world coordinate system uses the software methods proposed by Besl and McKay [11] and Nishino et al. [16]. In our research, using these methods, we aligned the 11 range images of each individual into a single world coordinate system. Figure 2.4 shows the result of aligning the 11 range images of a Gifu-Jidori, the same individual shown in the previous section. In this figure, pixels measured from different positions are shown in different colors; the figure shows 11 range images correctly aligned into a single world coordinate system.
Figure 2.4: Aligned range images of a Gifu-Jidori
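The alignment step above can be sketched in code. The following is an illustrative Python sketch of SVD-based iterated-closest-point alignment in the spirit of Besl and McKay; it is not the software actually used in this research, and all function and variable names are ours:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the SVD/Kabsch solution used inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # guard against reflections
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Align point set src to dst by iterated closest points."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Note that ICP of this kind only converges to the correct alignment from a reasonable initial pose, which is why multiple overlapping views and interactive initialization are typically used.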
2.3 Merging

The range images aligned into a single coordinate system in the previous section are a simple set of coordinates of 3-dimensional points, without connectivity, face information, etc. Because points were measured repeatedly from different viewpoints, the point density varies from place to place, and it is therefore difficult to analyze the geometric shape of the range images directly. To solve this problem, we merge the aligned range images and represent the geometric shape of the real object as a mesh model with face information. Wheeler et al. [17] proposed a method of merging range images and generating a mesh model using the Signed Distance Field. In our research, using this method, we generated the mesh model of each individual. Figure 2.5 shows an example: the merged mesh model of a Gifu-Jidori. In this thesis, the term "3-dimensional geometric model" refers to this merged mesh model.
Figure 2.5: Merged mesh model of a Gifu-Jidori
Chapter 3
2-Dimensional Analysis of Cross-Sectional Image

In this chapter, we describe how we analyzed specific 2-dimensional cross-sectional images extracted from the 3-dimensional geometric models of the fowls. We first describe a method for finding the most characteristic cross-section of a geometric model: we apply principal component analysis (PCA), which considers the model to be a set of 3-dimensional vectors, and extract the corresponding cross-sectional image. Next, we describe a method for approximating the shape of the cross-section by a non-uniform rational B-spline (NURBS), one of the parametric curves, and then two methods for extracting data vectors from the NURBS for multivariate analysis, e.g., PCA. Then, we describe a method for analyzing the data vectors extracted from the cross-sections of the geometric models by PCA and hierarchical cluster analysis (HCA). Finally, we describe the experiments in which we analyzed the 3-dimensional geometric models of the fowls' skulls, and discuss the experimental results.

3.1 2-dimensional methods for analyzing 3-dimensional models

3.1.1 Finding unpaired cross-sections by PCA

The 3-dimensional geometric models are generated by measuring the real objects with a laser scanner and then aligning and merging the measured data; they consist of sets of coordinates of 3-dimensional points together with connectivity information specifying which three points compose each triangle. Prior to the 2-dimensional analysis, we had to reduce the 3-dimensional models to 2 dimensions. Our goal in this section was to find the most characteristic cross-section of each 3-dimensional model and then to extract that cross-section as the object of 2-dimensional analysis. Considering the set of coordinates of the 3-dimensional points as a set of 3-dimensional vectors enables PCA to determine the cross-sections; the most characteristic cross-sections determined in this way are usually the unpaired cross-sections.

The procedure is as follows. Assume that a geometric model of a fowl's skull includes $N$ points. Let the coordinates of point $i$ be $(x_1^{(i)}, x_2^{(i)}, x_3^{(i)})$ $(i = 1, 2, \ldots, N)$, and let the averages of each dimension over the model be $(m_1, m_2, m_3)$. Then the covariance matrix $\mathbf{S}$ of the coordinates in the model can be written as $\mathbf{S} = \mathbf{X}^T \mathbf{X}$, where

$$
\mathbf{X} =
\begin{pmatrix}
x_1^{(1)} - m_1 & x_2^{(1)} - m_2 & x_3^{(1)} - m_3 \\
x_1^{(2)} - m_1 & x_2^{(2)} - m_2 & x_3^{(2)} - m_3 \\
\vdots & \vdots & \vdots \\
x_1^{(N)} - m_1 & x_2^{(N)} - m_2 & x_3^{(N)} - m_3
\end{pmatrix}.
$$

Next, we calculate $\mathbf{S}$'s three eigenvalues $\lambda_1, \lambda_2, \lambda_3$ and three eigenvectors

$$
\mathbf{w}^{(1)} = (w_1^{(1)}, w_2^{(1)}, w_3^{(1)}), \quad
\mathbf{w}^{(2)} = (w_1^{(2)}, w_2^{(2)}, w_3^{(2)}), \quad
\mathbf{w}^{(3)} = (w_1^{(3)}, w_2^{(3)}, w_3^{(3)}).
$$

In PCA, these eigenvalues and eigenvectors are called principal components (PCs); the largest eigenvalue gives the first PC, and so on. The eigenvectors define new coordinate axes for the points in 3-dimensional space: the larger an eigenvalue is, the more important the positional information along the corresponding eigenvector. Therefore, the cross-section

$$
w_1^{(3)}(x_1 - m_1) + w_2^{(3)}(x_2 - m_2) + w_3^{(3)}(x_3 - m_3) = 0,
$$

the plane spanned by the first and second PCs (its normal is the eigenvector of the smallest eigenvalue), carries the largest amount of information and is the most characteristic. Figure 3.1 shows this cross-section for the geometric model of a Hinai-Dori; Fig. 3.2 shows the two cross-sections corresponding to the other two eigenvectors.

To calculate the eigenvalues and eigenvectors of the covariance matrix (which is also symmetric), we used the Jacobi method, one of the classic methods for this calculation. The Jacobi method repeatedly applies a similarity transformation to the symmetric matrix $\mathbf{S}$ in order to reduce the magnitudes of the non-diagonal elements of $\mathbf{S}$ until $\mathbf{S}$ converges to a diagonal matrix.
Figure 3.1: The red line shows the most characteristic cross-section
Figure 3.2: Two other cross-sections
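The covariance construction and plane selection of Section 3.1.1 can be sketched as follows. This is an illustrative Python sketch: `numpy.linalg.eigh` stands in for the hand-written Jacobi iteration described in the text, and the function name is ours:

```python
import numpy as np

def characteristic_plane(points):
    """Return (normal, centroid) of the most characteristic cross-section:
    the plane through the centroid spanned by the first two principal axes,
    i.e. whose normal is the eigenvector of the smallest eigenvalue."""
    m = points.mean(axis=0)                 # (m1, m2, m3)
    X = points - m                          # centred coordinates
    S = X.T @ X                             # 3x3 covariance matrix S = X^T X
    evals, evecs = np.linalg.eigh(S)        # ascending eigenvalues
    normal = evecs[:, 0]                    # eigenvector of smallest eigenvalue
    return normal, m
```

A point x lies on the extracted cross-section exactly when normal · (x − m) = 0, matching the plane equation given above.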
This similarity transformation can be written as $\bar{\mathbf{S}} = \mathbf{G}(i,j,\theta)^T\, \mathbf{S}\, \mathbf{G}(i,j,\theta)$ $(i \neq j)$. Here, $\mathbf{G}(i,j,\theta)$ is an orthogonal transformation (called the Givens transformation) as shown in Equation (3.1):

$$
\mathbf{G}(i,j,\theta) =
\begin{pmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & \cos\theta & \cdots & \sin\theta & \\
& & \vdots & \ddots & \vdots & \\
& & -\sin\theta & \cdots & \cos\theta & \\
& & & & & \ddots \\
& & & & & & 1
\end{pmatrix}
\quad (3.1)
$$

where the $\cos\theta$ entries lie at positions $(i,i)$ and $(j,j)$, and the $\pm\sin\theta$ entries at positions $(i,j)$ and $(j,i)$.
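A compact eigensolver built from this Givens rotation might look like the following sketch (illustrative only, not the thesis implementation); at each step it zeroes the largest off-diagonal element with one rotation:

```python
import numpy as np

def jacobi_eigh(S, tol=1e-12, max_steps=10000):
    """Eigenvalues/eigenvectors of a symmetric matrix by Jacobi rotations:
    repeatedly zero the largest off-diagonal element s_ij with a Givens
    rotation G(i, j, theta) until the matrix converges to diagonal form."""
    A = S.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                              # accumulates the eigenvectors
    for _ in range(max_steps):
        off = np.abs(A - np.diag(np.diag(A)))
        i, j = np.unravel_index(off.argmax(), off.shape)
        if off[i, j] < tol:
            break
        # rotation angle chosen so that the new (i, j) element becomes zero
        theta = 0.5 * np.arctan2(2.0 * A[i, j], A[j, j] - A[i, i])
        G = np.eye(n)
        c, s = np.cos(theta), np.sin(theta)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s               # the Givens pattern of Eq. (3.1)
        A = G.T @ A @ G                        # similarity transformation
        V = V @ G                              # columns converge to eigenvectors
    return np.diag(A), V
```

Because each similarity transformation preserves the sum of squares of all elements while shrinking the off-diagonal sum F, the iteration converges to a diagonal matrix, as described next in the text.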
The Givens transformation is a rotation of the $(i,j)$-plane in a multidimensional space. Since this is a similarity transformation, the sum of squares of all elements of $\mathbf{S}$ is constant. Let $F$ and $\bar{F}$ be the sums of squares of the non-diagonal elements of $\mathbf{S}$ and $\bar{\mathbf{S}}$, that is,

$$
F = \sum_{i \neq j} s_{ij}^2, \qquad \bar{F} = \sum_{i \neq j} \bar{s}_{ij}^2,
$$

where $\mathbf{S} = \{s_{ij}\}$ and $\bar{\mathbf{S}} = \{\bar{s}_{ij}\}$. Since $\bar{s}_{ik}^2 + \bar{s}_{jk}^2 = s_{ik}^2 + s_{jk}^2$ $(k \neq i, j)$, only the $(i,j)$th element of $\mathbf{S}$ affects the change of $F$; indeed, $F - \bar{F} = 2\,(s_{ij}^2 - \bar{s}_{ij}^2)$. Therefore, we can compute the eigenvalues and eigenvectors by selecting $i$ and $j$ so as to keep $F - \bar{F}$ positive and repeating this transformation until $\mathbf{S}$ converges to a diagonal matrix.

3.1.2 Extracting a cross-sectional image from a 3-dimensional model

In this section, we describe how we extracted an image (referred to as a cross-sectional image) from the model on the cross-section specified in the previous section. First, we determined whether the cross-section crossed each triangle of the mesh; if it crosses a triangle, the triangle includes a part of the cross-sectional image. Next, we calculated the intersection points between the edges of the triangle and the cross-section and generated a new segment between the two points. Finally, we repeated this process for all triangles, connected all segments, and projected the image onto the cross-section plane. Figure 3.3 shows an example of a cross-sectional image.

We extracted the unpaired cross-sectional images from the 3-dimensional models of the fowls' skulls; in this research, we used only a specific part of this image: the "curve" consisting of the upper side of the beak, the glabella, the sinciput, and the upper side of the occipital. First, we defined the "horizontal line" of the image, i.e., the line connecting the point pair with the longest distance in the image; generally, these two points are the tip of the beak and the tip of the bump on the occipital. Then, we extracted the parts on the upper side of the horizontal line. In addition, we eliminated the points not included in the curve, i.e., the nostrils, the eye holes, measurement noise, etc. Figure 3.4 shows an example of a curve obtained by this process. In our 2-dimensional analysis, we used this curve as the object of analysis.
Figure 3.3: The unpaired cross-sectional image of the Hinai-Dori
Figure 3.4: The “curve” and the “horizontal line”
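The triangle-by-triangle extraction described in Section 3.1.2 can be sketched as follows (an illustrative Python sketch; the function and variable names are ours, not those of the thesis software):

```python
import numpy as np

def cross_section(vertices, triangles, normal, origin):
    """Intersect a mesh with the plane  normal . (x - origin) = 0  and return
    the resulting 2-D segments, one per triangle that the plane crosses."""
    d = (vertices - origin) @ normal            # signed distance of each vertex
    # build a 2-D basis (u, v) spanning the plane, for projecting the segments
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:                # normal nearly parallel to x-axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    segments = []
    for tri in triangles:
        pts = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            da, db = d[a], d[b]
            if (da < 0) != (db < 0):            # this edge crosses the plane
                s = da / (da - db)
                p = vertices[a] + s * (vertices[b] - vertices[a])
                pts.append(((p - origin) @ u, (p - origin) @ v))
        if len(pts) == 2:                       # the plane cuts the triangle
            segments.append(pts)
    return segments
```

Connecting segments that share endpoints then yields the closed cross-sectional contour from which the "curve" above the horizontal line is taken.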
3.1.3 Approximating the curves by NURBS

The curves extracted from the models in the previous section are simple sets of coordinates on a 2-dimensional plane. Since their point density is not uniform and the points are not ordered, it is difficult to analyze the curves directly. In this research, we therefore approximated each set of coordinates on the 2-dimensional plane by a non-uniform rational B-spline (NURBS), one of the parametric curves.

A parametric curve is a curve whose point coordinates $(x, y)$ are given by a polynomial function $f(t)$ of a parameter $t$. That is, a parametric curve is defined as a map from the parameter space to the 2-dimensional Euclidean space; it is the locus of $f(t)$ as the parameter $t$ ranges over its domain. The function $f(t)$ characterizes the shape of the curve. Generally, the function is of cubic or higher degree, and the higher the degree of the function, the more complex the curves it can define. Various parametric curves can be used to represent a curve, including the Bezier curve, the Hermite curve, and the B-spline; the B-spline is widely used.

The polynomial function of a B-spline is defined by $n+1$ control points $\mathbf{P}_0, \mathbf{P}_1, \ldots, \mathbf{P}_n$ and a vector $(x_0\; x_1\; \cdots\; x_m)$ $(\forall i,\; x_i \le x_{i+1})$ called the knot vector. These two factors decide the shape of the B-spline. Here, $f(t)$ can be written as Equation (3.2), where $k = m - n$:

$$
f(t) = \sum_{i=0}^{n} \mathbf{P}_i\, N_{i,k}(t) \quad (3.2)
$$

$k - 1$ is the degree of the B-spline. $N_{i,k}(t)$, called the base function of the B-spline, is defined recursively by Equation (3.3); the degree of $N_{i,k}$ is $k-1$ and the domain of the parameter $t$ is $[x_{k-2}, x_{n+2}]$:

$$
N_{i,k}(t) = \frac{t - x_i}{x_{i+k-1} - x_i}\, N_{i,k-1}(t)
           + \frac{x_{i+k} - t}{x_{i+k} - x_{i+1}}\, N_{i+1,k-1}(t),
\qquad
N_{i,1}(t) =
\begin{cases}
1 & (x_i \le t < x_{i+1}) \\
0 & (\text{otherwise})
\end{cases}
\quad (3.3)
$$

NURBS is an extension of the B-spline: it is non-uniform and rational, i.e., each control point has an arbitrary weight. Like a B-spline, a function $f'(t)$ characterizes the shape of a NURBS. In addition to the $n+1$ control points $\mathbf{P}_0, \mathbf{P}_1, \ldots, \mathbf{P}_n$ and the knot vector $(x_0\; x_1\; \cdots\; x_m)$, $f'(t)$ is defined by the weight vector $(w_0\; w_1\; \cdots\; w_n)$, which gives the weight of each control point. The function $f'(t)$ can be written as Equation (3.4):

$$
f'(t) = \frac{\displaystyle\sum_{i=0}^{n} N_{i,k}(t)\, w_i\, \mathbf{P}_i}
             {\displaystyle\sum_{i=0}^{n} N_{i,k}(t)\, w_i} \quad (3.4)
$$

Here, the degree of the NURBS $k-1$, the base functions $N_{i,k}(t)$, and the domain $[x_{k-2}, x_{n+2}]$ of the parameter $t$ are the same as in the case of the B-spline.

Now we consider approximating the shape of the curves by NURBS (notice that each curve is a simple set of 2-dimensional points). Generally, to approximate a set of points by a parametric curve, the points must be sorted into order along the curve, but this is difficult. Therefore, in this research we sorted the points approximately as follows: we projected the points onto the horizontal line computed in the previous section and sorted them by their positions on that line. In what follows, the points of the cross-section are assumed to have been sorted in this way.
Let $np$ be the number of points in the cross-section, and let $\mathbf{Q}_i$ be the coordinates of each point $(i = 0, 1, \ldots, np-1)$. If the degree of the NURBS $k-1$, the dimension of the knot vector $m+1$, and the number of control points $n+1$ are known in advance, we have to calculate the parameter values $t_i$, the knot vector $(x_0\; x_1\; \cdots\; x_m)$, and the locations of the control points $\mathbf{P}_0, \mathbf{P}_1, \ldots, \mathbf{P}_n$ in order to generate a proper NURBS, i.e., one for which $f'(t_i) \cong \mathbf{Q}_i$ is satisfied for all $i$.

First, we calculated the $t_i$. When calculating parameter values from a set of points, it is desirable that the intervals of the $t_i$ be proportional to the lengths of the corresponding parts of the NURBS. Since it is difficult to calculate the length of a part of the curve, we instead used the chord length between neighboring points as an approximation and calculated the $t_i$ approximately. The $t_i$ are given by Equation (3.5), where $d = \sum_{i=1}^{np-1} \lVert \mathbf{Q}_i - \mathbf{Q}_{i-1} \rVert$:

$$
t_i =
\begin{cases}
0 & (i = 0) \\[4pt]
t_{i-1} + \dfrac{\lVert \mathbf{Q}_i - \mathbf{Q}_{i-1} \rVert}{d} & (i = 1, \ldots, np-2) \\[4pt]
1 & (i = np-1)
\end{cases}
\quad (3.5)
$$
Next, we calculated the knot vector $(x_0\; x_1\; \cdots\; x_m)$. The knot vector is determined by Equation (3.6), where $d = np/(n-k+2)$, $i = \lfloor jd \rfloor$, and $\alpha = jd - i$ $(j = 1, 2, \ldots, n-k+1)$:

$$
x_0 = \cdots = x_{k-1} = 0, \qquad
x_{n+1} = \cdots = x_m = 1, \qquad
x_{k-1+j} = (1-\alpha)\, t_{i-1} + \alpha\, t_i
\quad (3.6)
$$

In this method, the knot vector has approximately the same distribution as the parameter values $t_i$.
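Equations (3.5) and (3.6) can be sketched together in Python as follows. This is an illustrative sketch; the constant $d = np/(n-k+2)$ and the index clamp are our assumptions where the averaging recipe is ambiguous in the text:

```python
import numpy as np

def chord_length_params(Q):
    """Parameter values t_i of Equation (3.5): chord-length parameterisation,
    with the chord between neighbouring points standing in for arc length."""
    chords = np.linalg.norm(np.diff(Q, axis=0), axis=1)
    d = chords.sum()
    t = np.concatenate([[0.0], np.cumsum(chords) / d])
    t[-1] = 1.0                          # pin the last parameter exactly to 1
    return t

def approximation_knots(t, n, k):
    """Knot vector of Equation (3.6): clamped ends, with interior knots placed
    so that their distribution follows the parameter values t_i
    (n+1 control points, order k, m+1 = n+k+1 knots)."""
    np_pts = len(t)
    m = n + k
    x = np.zeros(m + 1)
    x[:k] = 0.0                          # x_0 = ... = x_{k-1} = 0
    x[n + 1:] = 1.0                      # x_{n+1} = ... = x_m = 1
    dd = np_pts / (n - k + 2)
    for j in range(1, n - k + 2):        # interior knots x_k .. x_n
        i = int(j * dd)
        i = max(1, min(i, np_pts - 1))   # keep t[i-1], t[i] in range
        alpha = j * dd - i
        x[k - 1 + j] = (1 - alpha) * t[i - 1] + alpha * t[i]
    return x
```

With these two routines, the parameterisation and knot vector needed by the least-squares fit of the next subsection are fixed in advance.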
Then, we set up simultaneous equations to determine the locations of the control points $\mathbf{P}_0, \mathbf{P}_1, \ldots, \mathbf{P}_n$. Among these, the end control points $\mathbf{P}_0$ and $\mathbf{P}_n$ are already determined by the definition of the knot vector, i.e., $\mathbf{P}_0 = \mathbf{Q}_0$ and $\mathbf{P}_n = \mathbf{Q}_{np-1}$. We calculated the other control points $\mathbf{P}_1, \mathbf{P}_2, \ldots, \mathbf{P}_{n-1}$ by minimizing the square sum of the distances $d_i$, where $d_i$ is the distance between the point $\mathbf{Q}_i$ and the point $f'(t_i)$ on the curve at parameter value $t_i$. The square sum $D$ can be written as Equation (3.7):

$$
D = \sum_{i=1}^{np-2} d_i^2
  = \sum_{i=1}^{np-2} \bigl\lVert \mathbf{Q}_i - f'(t_i) \bigr\rVert^2
  = \sum_{i=1}^{np-2} \Bigl\lVert \mathbf{Q}_i - \sum_{j=0}^{n} N_{j,k}(t_i)\, \mathbf{P}_j \Bigr\rVert^2
\quad (3.7)
$$

Here, let $\mathbf{R}_i$ be

$$
\mathbf{R}_i = \mathbf{Q}_i - N_{0,k}(t_i)\, \mathbf{P}_0 - N_{n,k}(t_i)\, \mathbf{P}_n
             = \mathbf{Q}_i - N_{0,k}(t_i)\, \mathbf{Q}_0 - N_{n,k}(t_i)\, \mathbf{Q}_{np-1}
\qquad (i = 1, \ldots, np-2).
$$
Then $D$ can be represented using $\mathbf{R}_i$ as Equation (3.8):

$$
D = \sum_{i=1}^{np-2} \Bigl\lVert \mathbf{R}_i - \sum_{j=1}^{n-1} N_{j,k}(t_i)\, \mathbf{P}_j \Bigr\rVert^2
  = \sum_{i=1}^{np-2} \Bigl[\, \mathbf{R}_i \cdot \mathbf{R}_i
    - 2 \sum_{j=1}^{n-1} N_{j,k}(t_i)\, (\mathbf{R}_i \cdot \mathbf{P}_j)
    + \Bigl( \sum_{j=1}^{n-1} N_{j,k}(t_i)\, \mathbf{P}_j \Bigr)^{\!2} \Bigr]
\quad (3.8)
$$
Then we took partial derivatives in order to minimize $D$; that is, we need to solve Equation (3.9):

$$
\frac{\partial D}{\partial \mathbf{P}_l}
= \sum_{i=1}^{np-2} \Bigl[\, 2\, N_{l,k}(t_i) \sum_{j=1}^{n-1} N_{j,k}(t_i)\, \mathbf{P}_j
  - 2\, N_{l,k}(t_i)\, \mathbf{R}_i \Bigr] = \mathbf{0}
\qquad (l = 1, \ldots, n-1)
\quad (3.9)
$$
By solving the equation, we obtain Equation (3.10):

$$
-\sum_{i=1}^{np-2} N_{l,k}(t_i)\, \mathbf{R}_i
+ \sum_{j=1}^{n-1} \Bigl( \sum_{i=1}^{np-2} N_{l,k}(t_i)\, N_{j,k}(t_i) \Bigr) \mathbf{P}_j = \mathbf{0}
\;\;\therefore\;\;
\sum_{i=1}^{np-2} N_{l,k}(t_i)\, \mathbf{R}_i
= \sum_{j=1}^{n-1} \Bigl( \sum_{i=1}^{np-2} N_{l,k}(t_i)\, N_{j,k}(t_i) \Bigr) \mathbf{P}_j
\quad (3.10)
$$
The equation can be written simply as $(\mathbf{N}^T \mathbf{N})\, \mathbf{P} = \mathbf{R}$, where

$$
\mathbf{N} =
\begin{pmatrix}
N_{1,k}(t_1) & \cdots & N_{n-1,k}(t_1) \\
\vdots & & \vdots \\
N_{1,k}(t_{np-2}) & \cdots & N_{n-1,k}(t_{np-2})
\end{pmatrix},
\qquad
\mathbf{P} =
\begin{pmatrix}
\mathbf{P}_1 \\ \vdots \\ \mathbf{P}_{n-1}
\end{pmatrix},
$$

$$
\mathbf{R} =
\begin{pmatrix}
N_{1,k}(t_1)\, \mathbf{R}_1 + \cdots + N_{1,k}(t_{np-2})\, \mathbf{R}_{np-2} \\
\vdots \\
N_{n-1,k}(t_1)\, \mathbf{R}_1 + \cdots + N_{n-1,k}(t_{np-2})\, \mathbf{R}_{np-2}
\end{pmatrix}.
$$

Finally, we can calculate the locations of the control points $\mathbf{P}$ by solving $\mathbf{P} = (\mathbf{N}^T \mathbf{N})^{-1} \mathbf{R}$.
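The normal equations above can be sketched in Python as follows. This is an illustrative sketch of the non-rational case (all weights equal to 1, as in the derivation); the function names are ours:

```python
import numpy as np

def basis(i, k, t, x):
    """B-spline base function N_{i,k}(t) of Equation (3.3) (order k,
    degree k-1) over the knot vector x, by the Cox-de Boor recursion."""
    if k == 1:
        return 1.0 if x[i] <= t < x[i + 1] else 0.0
    out = 0.0
    if x[i + k - 1] != x[i]:
        out += (t - x[i]) / (x[i + k - 1] - x[i]) * basis(i, k - 1, t, x)
    if x[i + k] != x[i + 1]:
        out += (x[i + k] - t) / (x[i + k] - x[i + 1]) * basis(i + 1, k - 1, t, x)
    return out

def fit_control_points(Q, t, x, n, k):
    """Solve (N^T N) P = R of Equation (3.10) for the interior control
    points P_1..P_{n-1}, with P_0 and P_n pinned to the curve end points."""
    npts = len(Q)
    P0, Pn = Q[0], Q[-1]
    # R_i = Q_i - N_{0,k}(t_i) P_0 - N_{n,k}(t_i) P_n,  i = 1..npts-2
    Rvec = np.array([Q[i] - basis(0, k, t[i], x) * P0
                          - basis(n, k, t[i], x) * Pn
                     for i in range(1, npts - 1)])
    # N matrix: (npts-2) x (n-1) with entries N_{j,k}(t_i)
    Nmat = np.array([[basis(j, k, t[i], x) for j in range(1, n)]
                     for i in range(1, npts - 1)])
    R = Nmat.T @ Rvec                    # right-hand side of (3.10)
    interior = np.linalg.solve(Nmat.T @ Nmat, R)
    return np.vstack([P0, interior, Pn])
```

The solve assumes N^T N is nonsingular, which holds when there are enough data points spread across the knot spans.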
Figure 3.5 shows an example of a NURBS obtained by the process described above. By representing the simple set of coordinates as a NURBS, we can easily extract various vector data that characterize the shape of the curve. In this research, we approximated all the objects by NURBS with 50 control points and fifth degree. Once $f'(t)$ has been determined, we can calculate the coordinates of arbitrary points on the NURBS by evaluating $f'(t)$ while varying the parameter $t$ continuously. In the following experiments, we used coordinates calculated in this manner.
Figure 3.5: The shape of the NURBS from the Gifu-Jidori
3.1.4 Extracting vector data from the curve

In the previous section, we represented the shape of the curve on the specific cross-section as a NURBS, one of the parametric curves. Next, we extracted the features of the shape of the curve as vector data, in order to analyze the shape by multivariate analysis. In this research, we used two methods of extracting the vectors, as follows:

(1) Heights from the horizontal line
We first divided the "horizontal line" described in Section 3.1.2 into 100 parts. Next, we calculated the height from the midpoint of each division of the horizontal line to the corresponding point on the curve. Then, we considered these 100 heights to be a 100-dimensional vector $\mathbf{X} = (x_1\; x_2\; \cdots\; x_{100})$. Figure 3.6 illustrates the process.
Figure 3.6: The heights from the horizontal line
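The heights feature can be sketched as follows (an illustrative Python sketch; the orientation/sign convention of the heights is our assumption, and the curve is assumed to be given as sampled 2-D points):

```python
import numpy as np

def heights_vector(curve, n_parts=100):
    """Feature vector (x_1 ... x_100): heights of the curve above its
    'horizontal line' (the segment joining the two most distant points),
    sampled at the midpoints of n_parts equal divisions of that line."""
    # end points of the horizontal line: the most distant point pair
    d2 = ((curve[:, None, :] - curve[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(d2.argmax(), d2.shape)
    a, b = curve[i], curve[j]
    axis = (b - a) / np.linalg.norm(b - a)
    normal = np.array([-axis[1], axis[0]])   # perpendicular to the line
    rel = curve - a
    s = rel @ axis                  # position along the horizontal line
    h = rel @ normal                # signed height above the line
    length = np.linalg.norm(b - a)
    mids = (np.arange(n_parts) + 0.5) / n_parts * length
    order = np.argsort(s)
    return np.interp(mids, s[order], h[order])
```

Linear interpolation between curve samples stands in for evaluating the NURBS exactly at each midpoint.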
(2) Curvature
We first divided the NURBS into 110 parts. Next, we calculated the "curvatures" at the 100 points in the center of the 110. Let $\mathbf{p}_i$ $(i = -4, -3, \ldots, 105)$ be the points on the divided curve. We defined the curvature $x_i$ $(i = 1, 2, \ldots, 100)$ by Equation (3.11), where the sign is plus when the line turns clockwise and minus when it turns counterclockwise (see Fig. 3.7):

$$
x_i = \pm \arccos
\frac{(\mathbf{p}_{i-5} - \mathbf{p}_i) \cdot (\mathbf{p}_{i+5} - \mathbf{p}_i)}
     {\lVert \mathbf{p}_{i-5} - \mathbf{p}_i \rVert\; \lVert \mathbf{p}_{i+5} - \mathbf{p}_i \rVert}
\quad (3.11)
$$
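Equation (3.11) can be sketched as follows (an illustrative Python sketch; deciding "clockwise" from the sign of the 2-D cross product is our assumption, and the orientation depends on the direction in which the curve is traversed):

```python
import numpy as np

def signed_angles(p, span=5):
    """Signed 'curvature' of Equation (3.11): at each sample p_i, the angle
    between the chords to p_{i-span} and p_{i+span}, signed by the turning
    direction (cross product of the two chord vectors)."""
    angles = []
    for i in range(span, len(p) - span):
        u = p[i - span] - p[i]
        v = p[i + span] - p[i]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        ang = np.arccos(np.clip(c, -1.0, 1.0))   # clip guards rounding
        cross = u[0] * v[1] - u[1] * v[0]        # turning direction
        angles.append(ang if cross >= 0 else -ang)
    return np.array(angles)
```

Note that with this definition a straight line yields |x_i| = π (the two chords are antiparallel), and sharper bends give smaller magnitudes.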
Figure 3.7: Definition of the curvature 3.1.5 Principal Component Analysis In this section, we describe how we analyzed the two different vectors described in the previous section. The objects using the analysis were the following Japa-nese native fowls: five Gifu-Jidoris, five Syamos, five Satsuma-Doris, and four Ko-Syamos; also included in the analysis were five White Leghorns, a popular egg breeds of European fowls. Totally the objects included five breeds, 24 indi-viduals. Using the methods described in the previous section, we first extracted 100 di-mensional vectors from each fowl. Next, we defined the data matrix X , which is
a 24100× matrix and its ),( ji th element ijx is the value of the dimension j
of the individual i ’s data vector. Then, we analyzed the matrix using the Princi-pal Component Analysis (PCA). In this PCA analysis, we used the Jacobi method in order to calculate eigenvalues
19
and eigenvectors. The Jacobi method has already been described in Section 3.1.1. The eigenvalues correspond to the principal components (PC) of the data matrix X . The magnitude of each eigenvalue is proportional to the contribution of the corresponding PC for clustering the individuals. The ratio of the magnitude of an eigenvalue in the sum of the magnitude of all eigenvalues is named "Proportion," and the sum of the proportion from the first PC is named the "Cumulative Pro-portion." These values suggest how many PCs have to be considered in order to cluster the individuals. Then, the eigenvectors define the weights of the value of each dimension of data vector. The weights correspond to the influence of the dimension values, that is, the dot product of a data vector and an eigenvector is the location of the individ-ual in the PC; the dot product is named "Score." This score corresponds to the coordinate in a multi-dimensional space when we consider that the PCA is the rotation of the coordinate axis. By visualizing the values of each dimension of the eigenvector, we can show the specific parts that have significant differences in shape. 3.1.6 Hierarchical Cluster Analysis As a result of the PCA, if the proportion of the first PC is very large and the pro-portions of the other PCs are small, they can be disregarded; we can analyze the similarity of the shape by plotting the score of the first PC on the number line. Also, if the proportions of the first and second PCs are large enough to disregard the other PCs, we can analyze the similarity by plotting them on a scatter dia-gram. But if not, we have to perform a certain quantitative analysis in order to analyze the shape. Now, we assume that PCs from first to n th have a large enough proportion. Here, let n dimensional vector X′ be the vector of which each element is the PCA score weighted by proportion of the PC. Let the pre-PCA original data vector X
be $X = (x_1, x_2, \ldots, x_m)^T$ $(n \le m)$, and let the eigenvector $W^{(i)}$ that corresponds to the $i$th PC be $W^{(i)} = (w_1^{(i)}, w_2^{(i)}, \ldots, w_m^{(i)})^T$. Here, $X'$ can be written as Equation (3.12).

$$X' = \begin{pmatrix} w_1^{(1)} x_1 + w_2^{(1)} x_2 + \cdots + w_m^{(1)} x_m \\ w_1^{(2)} x_1 + w_2^{(2)} x_2 + \cdots + w_m^{(2)} x_m \\ \vdots \\ w_1^{(n)} x_1 + w_2^{(n)} x_2 + \cdots + w_m^{(n)} x_m \end{pmatrix} \qquad (3.12)$$
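The PCA pipeline described in this section (eigenvalues, proportions, cumulative proportions, and scores as dot products of data vectors with eigenvectors) can be sketched in Python with NumPy. The thesis uses the Jacobi method; `numpy.linalg.eigh`, used here for brevity, computes the same eigendecomposition of a symmetric matrix. The function name and test data are illustrative, not from the thesis.

```python
import numpy as np

def pca_scores(X):
    """PCA of a data matrix X (rows: individuals, columns: measurements).

    Returns the eigenvalues in descending order, the proportions and
    cumulative proportions (in %), and the score of every individual
    on every PC (the dot products with the eigenvectors).
    """
    Xc = X - X.mean(axis=0)              # center each measurement
    cov = np.cov(Xc, rowvar=False)       # covariance matrix of the data
    vals, vecs = np.linalg.eigh(cov)     # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]       # largest eigenvalue first
    vals, vecs = vals[order], vecs[:, order]
    proportion = 100.0 * vals / vals.sum()
    cumulative = np.cumsum(proportion)
    scores = Xc @ vecs                   # one row of scores per individual
    return vals, proportion, cumulative, scores
```

The variance of the scores along each PC equals the corresponding eigenvalue, which is a convenient way to check the decomposition.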
By considering $X'$ as a position vector in an $n$-dimensional space and plotting all individuals in that space, we can quantitatively analyze their similarity. Here, let the two position vectors that show the positions of two individuals $\alpha, \beta$ be $X'_\alpha = (x'_{\alpha 1}, x'_{\alpha 2}, \ldots, x'_{\alpha n})$ and $X'_\beta = (x'_{\beta 1}, x'_{\beta 2}, \ldots, x'_{\beta n})$, respectively. We defined their similarity as their Euclidean distance in the $n$-dimensional space,

$$d_{\alpha\beta} = \sqrt{\sum_{i=1}^{n} \left( x'_{\alpha i} - x'_{\beta i} \right)^2 }.$$

The smaller this distance is, the more similar
the shapes of $\alpha$ and $\beta$ look. In this research, we analyzed the distances by hierarchical cluster analysis (HCA) in order to estimate interbreed relationships. Cluster analysis, a method for dividing a set into clusters, generally uses the "within variance" and the "between variance" as its criteria.

Now, we assume that the data matrix $X'$ is divided into $g$ clusters, and that the data matrix $X_s$ of cluster $s$ and its average vector $m_s$ can be written as $X_s = \{x^s_{ij}\}$ and $m_s = \{m_{js}\}$ $(s = 1, 2, \ldots, g)$. Here, $X_s$ is an $N_s \times n$ matrix, $m_s$ is an $n$-dimensional vector whose $j$th element $m_{js}$ is the average of $X_s$'s $j$th column, and $N_s$ is the number of objects in cluster $s$. Let $m = \{m_j\}$ be the average vector of all objects and let $N$ be the total number of objects, so that $N = \sum_{s=1}^{g} N_s$.
Now, the total variance 2js can be written as Equation (3.13).
( )∑∑= =
−=g
s
N
ijij
sj
s
mXN
s1 1
22 1 (3.13)
21
2js can be divided into “within variance” 2)(
jw s and “between variance” 2)(
jB s .
Using these two variances, Equation (3.13) can be written as Equation (3.14).
( ) ( )2)(2)(
222 11
jB
jW
sjj
ss
s ij
sij
sj
ss
mmNN
mXN
s
+=
−+−= ∑∑∑ (3.14)
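The decomposition of the total variance into a within part and a between part, Equation (3.14), can be checked numerically on made-up one-dimensional cluster data (the numbers below are illustrative, not measurements from the thesis):

```python
import numpy as np

# Two hypothetical clusters of one measurement dimension.
clusters = [np.array([1.0, 2.0, 3.0]), np.array([7.0, 8.0, 9.0, 10.0])]
allx = np.concatenate(clusters)
N, m = allx.size, allx.mean()

total = ((allx - m) ** 2).sum() / N                                # s_j^2
within = sum(((c - c.mean()) ** 2).sum() for c in clusters) / N    # s_W(j)^2
between = sum(c.size * (c.mean() - m) ** 2 for c in clusters) / N  # s_B(j)^2
```

For this split the within variance is much smaller than the between variance, which is exactly the property a good clustering should have.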
The "within variance" shows the dispersion within each cluster, while the "between variance" shows the dispersion between the clusters. However the objects are divided, the total variance is constant; therefore, the division that minimizes the "within variance" is the optimal one.

In this research, we analyzed the objects by the single link method, one of the hierarchical cluster analyses. The single link method initially regards each object as one cluster, so at first the number of clusters equals the number of objects N. Next, we combined the pair of clusters that had the minimum "distance" into one cluster, reducing the number of clusters to N − 1. We repeated this operation until all clusters were combined into one. By recording this process, we can obtain the dendrogram of all objects. This method does not guarantee that the within variance is minimal at each stage of the operation, but it is generally agreed to be a sufficient approximation.

As the "distance" between clusters we used the distance between the barycenters of the clusters: the barycenter of a cluster is considered to be its typical point, and the distance between barycenters is considered to be the distance between the clusters. In this research, we used the Euclidean distance in a multidimensional space as the distance.
Let the position $X'_i$ of an individual $i$ in a cluster $C_g$ be $X'_i = (x^g_{i1}, x^g_{i2}, \ldots, x^g_{in})$ and the barycenter of the cluster $C_g$ be $(\bar{x}_{g1}, \bar{x}_{g2}, \ldots, \bar{x}_{gn})$. Then, the distance $D_{gh}$ between the barycenters of the clusters $C_g$ and $C_h$ can be calculated by Equation (3.15).

$$D_{gh} = \sqrt{\sum_{k=1}^{n} \left( \bar{x}_{gk} - \bar{x}_{hk} \right)^2 } \qquad (3.15)$$
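The clustering procedure of this section — treat each individual as a cluster, repeatedly merge the pair of clusters whose barycenters are closest in the sense of Equation (3.15), and record the merges — can be sketched as follows. The function name is illustrative, and the recorded merge history stands in for the dendrogram.

```python
import numpy as np

def agglomerate(points):
    """Agglomerative clustering with barycenter (centroid) distance.

    points: (N, n) array, one row per individual.
    Returns the merge history as (cluster_a, cluster_b, distance) tuples,
    where clusters are frozensets of row indices.
    """
    clusters = [frozenset([i]) for i in range(len(points))]
    history = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ga = points[list(clusters[a])].mean(axis=0)  # barycenter of a
                gb = points[list(clusters[b])].mean(axis=0)  # barycenter of b
                d = float(np.linalg.norm(ga - gb))           # Eq. (3.15)
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        history.append((clusters[a], clusters[b], d))
        merged = clusters[a] | clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
    return history
```

On four points forming two well-separated pairs, the first two merges happen at distance 1 and the final merge at distance 10, reproducing the two-cluster structure.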
3.2 Experiment

In this section, we analyze the models of the fowls' skulls by the methods previously described in this chapter. The objects of the analysis are two kinds of data vectors: the heights from the horizontal line and the curvatures. The objects are five breeds, 24 individuals in total: five Gifu-Jidoris, five Syamos, five Satsuma-Doris, four Ko-Syamos and five White Leghorns. Except for the European White Leghorn, which is the most popular egg breed, they are all native Japanese fowls.

3.2.1 Height from Horizontal Line

In this section, we used the heights from the horizontal line as the data vectors. The results of the PCA method are shown in Table 3.1.
Table 3.1: Result of the PCA method using the heights

PC     Eigenvalue   Proportion (%)   Cumulative proportion (%)
1st    0.983524     73.3775          73.3775
2nd    0.259503     19.3607          92.7382
3rd    0.0597588    4.45841          97.1966
4th    0.0144133    1.07533          98.272
5th    0.010312     0.769348         99.0413
This table shows that the influence of the first PC is great, and that the cumulative proportion through the second PC is more than 92%. The graphs shown in Figs. 3.8 and 3.9 visualize the eigenvectors of the first and second PCs. The x-axis of the graphs is the dimension of the vector; the left side of the graphs corresponds to the beak of a fowl, and the right side to its occipital.
Figure 3.8: Visualization of the first eigenvector (component values plotted against dimension index 1–97)
Figure 3.9: Visualization of the second eigenvector (component values plotted against dimension index 1–97)
The first eigenvector shows that the region named the "Stop" (see Fig. 3.10) is the most important region of the fowls' skulls. The "Stop" is the concave region between the beak and the skull; this region is regarded as important for recognizing the breed in morphological studies.
Figure 3.10: The “Stop”
However, the superiority of the first eigenvalue, together with the fact that the values of all dimensions of the first eigenvector are large, suggests that the first PC mainly depends on the size of the individual. On the other hand, the second eigenvector shows that the bulge region in the occipital also characterizes the shape. This fact, too, is well known from morphological studies. In this experiment, since the proportions of the third and subsequent PCs are very small, they can be disregarded; we can therefore draw a scatter diagram in order to estimate the relationships qualitatively. Figure 3.11 shows the scatter diagram.
Figure 3.11: The scatter diagram (x-axis: first-PC score, y-axis: second-PC score; plotted breeds: gifu, syamo, satsuma, kosyamo, white leghorn)
In this result, the dispersion within each breed is large; that is, the individuals of the same breed are not grouped into one cluster, and the interbreed relationships are ambiguous. The cause is that the PCA score greatly depends on the size of each individual.

3.2.2 Curvature

In this section, we used the curvatures at the points on the curve as the data vectors. The results of the PCA method are shown in Table 3.2. Note that the sixth and subsequent PCs are omitted.
Table 3.2: Result of the PCA method using the curvatures

PC     Eigenvalue   Proportion (%)   Cumulative proportion (%)
1st    14.5665      35.8589          35.8589
2nd    9.76095      24.0289          59.8878
3rd    4.262        10.4919          70.3798
4th    2.60758      6.41919          76.799
5th    2.41146      5.93638          82.7353
Since the proportion of the first PC is small compared with that in the analysis of the heights, more PCs are considered to carry effective information. Moreover, since the cumulative proportion of the first to the twentieth PC is 99.8%, taking PCs up to the 20th into consideration gives a sufficient amount of information. The graphs shown in Figs. 3.12 and 3.13 visualize the eigenvectors of the first and second PCs. The x-axis of the graphs is the dimension of the vector; the left side of the graphs corresponds to the beak of a fowl, and the right side to its occipital.
Figure 3.12: Visualization of the first eigenvector (component values plotted against dimension index 1–85)
Figure 3.13: Visualization of the second eigenvector (component values plotted against dimension index 1–85)
These eigenvectors show that the "stop" is the most important region for recognizing the shape. Compared with the eigenvectors in the analysis of the heights, which have large values at every dimension, these eigenvectors have a sharp peak around the fiftieth dimension; this peak corresponds to the "stop". They therefore show the influence of the "stop" more clearly than the eigenvectors of the height analysis, which mainly depend on the size of the individual. Table 3.3 shows the results of the PCA analysis. Note that the sixth and subsequent PCs are omitted.
Table 3.3: The results of the analysis (columns: 1st PC, 2nd PC, 3rd PC, 4th PC, 5th PC)

PC score
gifu #1 -0.32769 -0.451798 -0.26573 0.305602 -0.21462
gifu #2 -0.26897 -0.243197 -0.13171 0.459859 -0.01483
gifu #3 -0.475068 -0.0301512 -0.45521 0.122168 0.48787
gifu #4 -1.04936 -0.879439 -0.04244 0.489559 0.061843
gifu #5 -0.547994 -0.344 -0.43393 0.516628 0.05932
syamo #1 0.602007 0.431786 -0.15289 0.042992 0.526117
syamo #2 0.83149 0.427241 0.139825 0.729537 -0.09477
syamo #3 0.54211 0.00860734 -0.73032 -0.13755 0.025435
syamo #4 0.717337 0.452244 -0.33382 0.170177 0.057046
syamo #5 -0.323577 -0.148849 -0.59422 -0.32671 -0.10091
satsuma #1 1.35094 0.491933 -0.04554 -0.19521 -0.31212
satsuma #2 0.744543 -0.261151 -0.49186 -0.3368 -0.20416
satsuma #3 1.44379 0.502823 1.01591 0.246917 0.164797
satsuma #4 -0.0848454 -0.471096 -0.13488 -0.5128 -0.3783
satsuma #5 1.04662 0.249627 -0.29446 -0.32843 0.543785
kosyamo #1 -0.90277 1.37731 0.135351 -0.2575 -0.0596
kosyamo #2 -0.948332 0.749251 0.065882 0.002422 -0.22071
kosyamo #3 -1.13792 1.20985 0.434034 -0.21923 -0.1748
kosyamo #4 -0.888496 0.590511 -0.02892 0.038977 -0.2576
wl #1 -0.625011 -0.645899 0.565781 -0.3591 0.859315
wl #2 -0.558553 -0.732875 0.448752 -0.14809 0.096596
wl #3 0.816106 -0.792646 0.409462 0.119906 -0.49482
wl #4 0.192028 -1.06005 0.338303 -0.47476 -0.24863
wl #5 -0.148383 -0.430033 0.582635 0.051433 -0.10628
average 6.66667E-08 -3.58333E-08 2.37E-07 1.79E-08 -3.7E-08
proportion 35.8589 24.0289 10.4919 6.41919 5.93638
normalized PC score
gifu #1 -11.75060294 -10.85620896 -2.78797 1.961717 -1.27409
gifu #2 -9.644968333 -5.843756393 -1.38186 2.951922 -0.08803
gifu #3 -17.03541591 -0.72450017 -4.776 0.78422 2.896182
gifu #4 -37.6288953 -21.13195179 -0.44522 3.142572 0.367121
gifu #5 -19.65046205 -8.2659416 -4.55279 3.316333 0.352145
syamo #1 21.58730881 10.37534262 -1.60415 0.275974 3.12323
syamo #2 29.81631676 10.26613126 1.46703 4.683037 -0.56256
syamo #3 19.43946828 0.206824912 -7.66247 -0.88298 0.150989
syamo #4 25.72291575 10.86692585 -3.50235 1.092398 0.338647
syamo #5 -11.60311529 -3.576677736 -6.23451 -2.09719 -0.59902
satsuma #1 48.44322237 11.82060886 -0.47785 -1.25311 -1.85283
satsuma #2 26.69849298 -6.275171264 -5.16052 -2.16201 -1.21197
satsuma #3 51.77272123 12.08228358 10.65883 1.585007 0.978298
satsuma #4 -3.042462714 -11.31991867 -1.41517 -3.29175 -2.24571
satsuma #5 37.53064192 5.99826222 -3.08948 -2.10824 3.228114
kosyamo #1 -32.37233915 33.09524426 1.420089 -1.65293 -0.35378
kosyamo #2 -34.00614235 18.00367735 0.691227 0.01555 -1.31019
kosyamo #3 -40.80455949 29.07136467 4.553841 -1.4073 -1.03769
kosyamo #4 -31.86048921 14.18932977 -0.30342 0.250203 -1.52919
wl #1 -22.41220695 -15.52024248 5.936118 -2.30514 5.10122
wl #2 -20.02909617 -17.61018009 4.708261 -0.95059 0.573429
wl #3 29.26466344 -19.04641147 4.296034 0.769699 -2.93743
wl #4 6.885912849 -25.47183545 3.549441 -3.04755 -1.47596
wl #5 -5.320851159 -10.33321995 6.112948 0.330156 -0.63091
average 2.39059E-06 -8.61036E-07 2.49E-06 1.15E-07 -2.2E-07
total variance 780.4368417 234.8270943 19.54843 4.477004 3.540884
sum of total variance 1044.947275
average PC scores in each breeds
gifu -19.14206891 -9.364471782 -2.78877 2.431353 0.450666
syamo 16.99257886 5.627709382 -3.50729 0.614248 0.490257
satsuma 32.28052316 2.461212946 0.103161 -1.44602 -0.22082
kosyamo -34.76088255 23.58990401 1.590435 -0.69862 -1.05772
wl -2.322315597 -17.59637789 4.920561 -1.04069 0.126068
between variance 556.0931309 183.383622 9.650927 2.052757 0.292316
sum of between variance 752.4984077
within variance
gifu 491.2878891 228.9501804 14.53245 4.493621 9.262205
syamo 1085.479571 185.6110988 53.06903 26.49167 9.365587
satsuma 1947.61477 458.9180075 151.9643 13.58222 21.07905
kosyamo 51.21308475 239.9746285 13.20605 2.823243 0.78196
wl 1808.653744 121.189421 4.768217 10.79117 37.47684
within variance 224.3437108 51.44347234 9.897502 2.424247 3.248568
sum of within variance 14.62244336
between + within 780.4368417 234.8270943 19.54843 4.477004 3.540884
sum of "between + within" 1044.947275
Next, the dendrogram of the relationships of the five breeds determined by the analysis is shown in Fig. 3.14. These results show that, in the analysis of the curvatures, the differences within a breed are small; the same-breed individuals are correctly classified into one cluster. The shapes of the Gifu-Jidoris and the White Leghorns were the most similar, followed by the Syamos and the Ko-Syamos. But the differences within the Satsuma-Doris were too large to classify them into one cluster.
Figure 3.14: Dendrogram of the relationships of the breeds (leaf order: kosyamo #2, #4, #1, #3; gifu #1, #5, #2, #3, #4; wl #2, #5, #1, #3, #4; syamo #3, #4, #2, #1, #5; satsuma #2, #5, #4, #1, #3)
Chapter 4
3-Dimensional Analysis

In this chapter, we describe how we directly analyzed the 3-dimensional models that were generated by the methods described in Chapter 2. First, we describe the concept of the spherical attribute image (SAI), one of the methods for mapping the geometrical attributes of a 3-dimensional mesh surface onto a sphere; we then describe the algorithm for generating a SAI. Next, we point out the problems of the conventional SAI, propose a new method for solving them, and show the validity of the method. Then, we analyze and compare the 3-dimensional models of the fowls' skulls by using the new SAI method, and discuss the result. Finally, we describe how we generated synthetic models of fowls' skulls by morphing SAIs, and clearly estimate the phylogenetic relationships of the fowls by analyzing them.

4.1 The concept of the Spherical Attribute Image (SAI)

The spherical attribute image (SAI) is a method for mapping the geometrical attributes of a 3-dimensional mesh model onto a sphere in order to represent the entire shape of a 3-dimensional object. The attributes are defined at each point of the mesh, and the SAIs are constant against rotation and scaling of the object. With the SAI, by using the "simplex angles" as the geometric attributes mapped onto the sphere, it is possible to reconstruct the original 3-dimensional shape from the SAI alone. Moreover, since the SAIs are constant against rotation and scaling, it is possible to compare object shapes through their SAIs regardless of whether the original object has been rotated. By linear interpolation of the attributes on different SAIs, a synthetic 3-dimensional shape can be reconstructed; this synthetic shape is a neutral shape of its multiple origins.

(1) Discrete representation of a 2-dimensional curve

Before representing a 3-dimensional shape, it is helpful to understand how
one can represent a 2-dimensional shape. Suppose there is a set of 2-dimensional points as the object representation. A natural way of representing this shape is to approximate its boundary by a list of line segments, as shown in Fig. 4.1.
Figure 4.1: Approximation of a 2-dimensional shape by line segments

Here, the line segments have equal lengths. The connected points of the lines are called "nodes"; one can discretely map the geometrical attributes of all nodes onto a circle while keeping the sequence of the nodes (see Fig. 4.2).
Figure 4.2: Mapping the attributes onto a circle

Since they are mapped onto a circle, these attributes are constant against rotation and scaling: if the line segments are sufficiently dense, the representation yields the same attribute-mapped circle even when the object has been rotated, and likewise when it has been scaled. In this representation, each geometrical attribute is the "turning angle," one of the discrete curvatures. The turning angles are defined at each node of the line segments; the turning angle is the exterior angle of the two line segments connected to the node (see Fig. 4.3). Under this definition, a turning angle α is positive when the curve at the node is convex, negative when the curve is concave, and zero when the curve is flat.
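A turning angle depends only on a node and its two neighbors, so the whole attribute list of a closed polyline can be computed in a few lines. This NumPy sketch (the function name is illustrative) uses the signed angle between the incoming and outgoing edge vectors, matching the convex-positive convention for a counter-clockwise boundary:

```python
import numpy as np

def turning_angles(nodes):
    """Signed exterior ('turning') angles at each node of a closed 2-D polyline.

    nodes: (N, 2) array of vertices in counter-clockwise order.
    Positive angle = convex node, negative = concave, zero = flat.
    """
    prev = np.roll(nodes, 1, axis=0)
    nxt = np.roll(nodes, -1, axis=0)
    v1 = nodes - prev                              # incoming edge vector
    v2 = nxt - nodes                               # outgoing edge vector
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = (v1 * v2).sum(axis=1)
    return np.arctan2(cross, dot)                  # signed exterior angle
```

For a simple closed counter-clockwise polygon the turning angles sum to 2π, and they are unchanged by rotation and scaling, as the text requires.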
Figure 4.3: Definition of the turning angle

In addition, since these attributes are defined only by the relative positions of the node and its two neighbor nodes, they are constant against rotation; and since they are angles, they are constant against scaling. These attributes are therefore suitable for this method of shape representation. The algorithm for comparing multiple 2-dimensional shapes by this method is as follows:

(1) Approximate the boundary of each 2-dimensional shape by a list of line segments
(2) Calculate the turning angles as geometrical attributes at all nodes
(3) Map the attributes onto a circle, keeping their sequence
(4) Turn the circles to minimize the difference of the attributes mapped at the same place on the circle
(5) Compare the attributes at each place on the circle

The circle onto which the turning angles are mapped can also reconstruct the original 2-dimensional shape. If the positions of the two neighbor nodes P1, P2 are known, a node P has to exist on the perpendicular bisector of the line connecting the two neighbors. And, from the definition of the turning angle, the absolute value of the turning angle is proportional to the distance between P and the line connecting the two neighbors. That is, the position of a node P can be determined by the turning angle at P. From these facts, given the same number of nodes, an arbitrary list of line segments can be transformed into the original shape by iteratively minimizing the "error" at all nodes simultaneously; here, the error is the distance between the current position of a node and the "true" position determined by the positions of its two neighbor nodes and the turning angle. In this method, generating the attribute-mapped circle from the shape is called "forward mapping," and reconstructing the original shape from the attribute-mapped circle is called "inverse mapping."

(3) Expansion to 3-dimensional spaces

The SAI is the direct 3-dimensional expansion of the method of representing 2-dimensional shapes described in the previous section. The SAI image is a sphere onto whose surface the geometric attributes of the original 3-dimensional shape are mapped. In the SAI method, first, a "semi-regular" mesh model approximates the boundary of the 3-dimensional shape. The semi-regular mesh is a mesh that is not self-intersecting and in which every vertex has exactly three neighbor vertices; it is the direct expansion of the list of line segments, all of whose nodes have exactly two neighbor nodes. In this method, the mesh has to satisfy a constraint called "local regularity." This constraint is the 3-dimensional expansion of the constraint that all line segments have equal lengths; we will describe local regularity in the 3-dimensional case in Section 4.3.3.

Next, the "simplex angle," one of the geometrical attributes, is calculated at each node of the mesh. This attribute is the 3-dimensional expansion of the turning angle. Like the turning angle, the simplex angle can be calculated only from the relative positions of the node and its three neighbors; its sign shows whether the surface is convex or concave, and its absolute value is proportional to the distance between the node and the triangle made by its three neighbors. We will describe the simplex angle in Section 4.3.1.

In the SAI method, as in the 2-dimensional case, "forward mapping" is the generation of the attribute-mapped sphere from the 3-dimensional shape, and "inverse mapping" is the reconstruction of the original 3-dimensional shape from the attribute-mapped sphere. Figure 4.4 shows examples of forward mapping and inverse mapping.
In a SAI image, a red (blue) pixel shows that the simplex angle is positive (negative) at its position, and hence that the surface is convex (concave) there.
4.2 Related work

To generate SAI images, several methods of mapping geometrical attributes defined at each point of a 3-dimensional model onto a sphere have been proposed, including Gauss mapping [1], the Extended Gaussian Image (EGI) [2] and the Complex Extended Gaussian Image (CEGI) [3]. But since the fowls' skulls analyzed in this research were concave objects and were not topologically equivalent to spheres (they contain holes), these methods could not be applied. Therefore, we used the deformable surface method, which deforms an initially spherical mesh to fit the object's surface by iterative calculation. With regard to the deformable surface method, related research includes irregular meshes [4], finite element models [4], balloon models [5] and applications to medical imaging [6].

4.3 The SAI Algorithm

In this section, we describe the details of the SAI algorithm. We first describe the definition of the simplex angle. Next, we describe how to make the geodesic dome that serves as the initial spherical mesh for deformation, and how to deform the mesh to the object's surface. Then, we describe how to map the simplex angle calculated at each mesh node onto the sphere, using the property of the deformable surface that the mesh nodes before and after deformation have one-to-one correspondences.

4.3.1 Simplex Angle

The simplex angle is a geometrical attribute that is calculated at each node of a semi-regular mesh from the relative positions of the node and its three neighbor nodes.
Let $P$ be the position of a node, and $P_1, P_2, P_3$ be the positions of the three neighbor nodes of $P$. Let $O$ be the center of the circumscribed sphere of the tetrahedron $P P_1 P_2 P_3$, and $C$ be the center of the circumcircle of the triangle $P_1 P_2 P_3$. Let $Z$ be the line connecting $O$ and $C$, and $\Pi$ be a plane that includes $Z$ and $P$ (see Fig. 4.5). Moreover, consider the cross-section of the circumscribed sphere of the tetrahedron $P P_1 P_2 P_3$ by the plane $\Pi$.
Figure 4.5: The concept of simplex angle
On the section plane, the circumcircle of the triangle $P_1 P_2 P_3$ becomes the subtense of $P$. The exterior angle $\varphi$ of $P$ on this plane is the simplex angle at the mesh node $P$ (see Fig. 4.6).
Figure 4.6: The definition of the simplex angle
From these definitions, the simplex angle $\varphi$ can be written as Equation (4.1), where $r$ is the radius of the circumcircle of the triangle $P_1 P_2 P_3$, $R$ is the radius of the circumscribed sphere of the tetrahedron $P P_1 P_2 P_3$, and $N$ is the normal vector of the triangle $P_1 P_2 P_3$.

$$\cos\varphi = \frac{\|\overrightarrow{OC}\|}{R}\,\operatorname{sign}(\overrightarrow{OC}\cdot N), \qquad \sin\varphi = \frac{r}{R}\,\operatorname{sign}(\overrightarrow{P P_1}\cdot N) \qquad (4.1)$$
Note that the domain of $\varphi$ is $[-\pi, \pi]$. Under this definition, like the turning angle, the simplex angle is an angle calculated only from the relative positions of the node and its neighbors, so it is constant against rotation and scaling. Moreover, when the simplex angle is positive, the node $P$ is above the neighbor triangle $P_1 P_2 P_3$; when it is negative, $P$ is below the triangle; and when it is zero, $P$ is on the triangle. That is, when the simplex angle is positive/negative/zero, the surface at the node is convex/concave/flat. Also, from the definition, the absolute value of the simplex angle is proportional to the distance between the node $P$ and the neighbor triangle $P_1 P_2 P_3$.
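A sketch of computing the simplex angle from a node and its three neighbors in the spirit of Equation (4.1): the circumsphere center O and circumcircle center C are found by solving small linear systems, and φ is recovered with atan2, since ‖OC‖² + r² = R² by construction. The convexity sign here is taken from which side of the neighbor triangle the node lies on — an assumption chosen to match the convex/concave description in the text, since the sign conventions of the printed equation depend on the mesh orientation. The function name is illustrative, and a non-flat (non-coplanar) configuration is assumed.

```python
import numpy as np

def simplex_angle(P, P1, P2, P3):
    """Simplex angle at node P with neighbors P1, P2, P3 (cf. Eq. 4.1)."""
    P, P1, P2, P3 = map(np.asarray, (P, P1, P2, P3))
    N = np.cross(P2 - P1, P3 - P1)
    N = N / np.linalg.norm(N)                     # unit normal of the triangle

    # Circumsphere centre O: equidistant from P, P1, P2, P3 (linear system).
    A = 2.0 * np.array([P1 - P, P2 - P, P3 - P])
    b = np.array([P1 @ P1 - P @ P, P2 @ P2 - P @ P, P3 @ P3 - P @ P])
    O = np.linalg.solve(A, b)
    R = np.linalg.norm(P - O)                     # circumsphere radius

    # Circumcircle centre C of the neighbour triangle, constrained to its plane.
    A2 = np.vstack([2.0 * (P2 - P1), 2.0 * (P3 - P1), N])
    b2 = np.array([P2 @ P2 - P1 @ P1, P3 @ P3 - P1 @ P1, N @ P1])
    C = np.linalg.solve(A2, b2)
    r = np.linalg.norm(P1 - C)                    # circumcircle radius

    side = np.sign((P - (P1 + P2 + P3) / 3.0) @ N)   # convex (+) vs. concave (-)
    cos_phi = np.linalg.norm(C - O) / R * np.sign((C - O) @ N)
    sin_phi = r / R * side
    return np.arctan2(sin_phi, cos_phi)
```

For an equilateral triangle of circumradius 1 with the node at height 1 above its center, O and C coincide and φ = π/2; reflecting the node below the plane flips the sign, and raising it further drives φ toward π.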
These features of the simplex angle show that this angle is a natural 3-dimensional expansion of the turning angle. The SAI image is generated by calculating this simplex angle at each node of the shape-approximating mesh and mapping the angles onto the sphere.

4.3.2 Geodesic Dome: generating the initial mesh

The initial mesh of the deformable surface has to be a sphere-circumscribed semi-regular mesh. In this research, we first generated a sphere-circumscribed regular mesh named the "geodesic dome," and dualized it into the semi-regular mesh. Generally, geodesic domes are sphere-circumscribed polyhedrons made of triangles. There are countless such polyhedrons, but for generating a more symmetrical mesh, it is desirable that (1) the ratio of the lengths of the three sides of each triangle is nearly 1, that is, each triangle of the polyhedron is nearly a regular triangle, and (2) the triangles are as similar in shape to one another as possible. Among the sphere-circumscribed polyhedrons, it is well known that exactly three completely fulfill these conditions, namely the three regular solids made only of regular triangles, shown in Fig. 4.7: the regular tetrahedron, the regular octahedron, and the regular icosahedron.

Figure 4.7: Regular tetrahedron, octahedron and icosahedron

Since, among these three polyhedrons, the regular icosahedron is the most similar to the sphere, dividing its facial triangles can generate an even more sphere-like geodesic dome. First, bisecting the three edges of each triangle and connecting the three midpoints divides each original triangle into four triangles and makes the polyhedron more sphere-like. Because this new polyhedron is not circumscribed to the sphere, the newly generated points (the midpoints of the three edges) have to be projected from the center of the sphere onto its surface. As a result, this operation divides each triangle into one regular triangle and three isosceles triangles. By repeating this operation over all triangles of the polyhedron, a more sphere-like geodesic dome can be generated. Figure 4.8 shows the aforementioned process; in the figure, the red triangles are regular triangles.
Figure 4.8: The processes of dividing the geodesic dome
Iterating this operation can generate a geodesic dome of arbitrary density. After iterating $n$ times, the generated geodesic dome has $20 \times 4^n$ triangles. In this research, we used the geodesic dome with $n = 4$, shown in Fig. 4.9, as the initial mesh for generating SAI images. This geodesic dome is a polyhedron that contains 5120 triangles.
Figure 4.9: Geodesic dome that contains 5120 triangles

Next, calculating the dual mesh of the geodesic dome (note that the dome itself is a regular mesh) generates a semi-regular, sphere-like mesh. In this case, the generated semi-regular mesh has 5120 vertices; every mesh node has exactly three neighbor nodes, and the mesh is made up of many hexagons and a few pentagons (see Fig. 4.10).
Figure 4.10: Generated dual mesh and its macrograph
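The divide-and-project construction can be sketched as follows, starting from the regular icosahedron (the 20-triangle regular solid, consistent with the 20 × 4^n face count used here). The vertex and face tables below are the standard unit-icosahedron construction; the function names are illustrative.

```python
import numpy as np

def icosahedron():
    """Vertices (on the unit sphere) and faces of a regular icosahedron."""
    t = (1.0 + np.sqrt(5.0)) / 2.0      # golden ratio
    v = np.array([[-1, t, 0], [1, t, 0], [-1, -t, 0], [1, -t, 0],
                  [0, -1, t], [0, 1, t], [0, -1, -t], [0, 1, -t],
                  [t, 0, -1], [t, 0, 1], [-t, 0, -1], [-t, 0, 1]], float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return v, f

def subdivide(verts, faces):
    """Split each triangle at its edge midpoints, projecting new points to the sphere."""
    verts = list(map(np.asarray, verts))
    cache = {}                          # shared midpoints between adjacent faces
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (verts[i] + verts[j]) / 2.0
            verts.append(m / np.linalg.norm(m))   # project onto the sphere
            cache[key] = len(verts) - 1
        return cache[key]
    out = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, out
```

Four subdivisions yield 20 × 4⁴ = 5120 triangles, the density used in this research, with every vertex lying on the unit sphere.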
4.3.3 Deformable Surface

In this section, we describe how to deform the initial mesh, the spherical semi-regular mesh described in the previous section, to the 3-dimensional objective model. The term "model" means the object whose shape is to be approximated. This method assumes that the barycenter of the initial mesh is equal to the barycenter of the objective model and that the initial mesh is large enough to fully cover the model.

Now, consider two imaginary "forces": one is calculated from the relative positions between the model and each node of the mesh, and the other from the "smoothness" of the mesh itself. These forces independently move each node. By iteratively solving the equation of motion for these forces, the mesh is deformed to the model. In this method, it is important that the mesh's geometrical structure is retained during the deformation, that is, that the three neighbors of each node are fixed. Therefore, by using a deformable surface, each node of the initial mesh and each node of the deformed mesh have a one-to-one correspondence. These correspondences make it possible to map the attributes defined at each node of the deformed mesh onto the initial spherical mesh.

Moreover, by keeping a constraint named "local regularity" during deformation, the original shape can be reconstructed from the SAI image, the attribute-mapped sphere. These features are products of the force defined by the smoothness of the mesh itself, that is, the "internal force" described in the following section. Local regularity is the 3-dimensional expansion of the constraint that all line segments have equal lengths. Specifically, the constraint is that, at each node of the mesh, the foot of the perpendicular from the node to the neighbor triangle is equal to the barycenter of the neighbor triangle. That is, as shown in Fig. 4.11, let
$P$ be a node of the mesh and $P_1, P_2, P_3$ be its three neighbor nodes, and consider the tetrahedron $P P_1 P_2 P_3$. Local regularity is the constraint that the foot $Q$ of the perpendicular from $P$ to the triangle $P_1 P_2 P_3$ is equal to the barycenter $G$ of the triangle $P_1 P_2 P_3$.
The algorithm of the deformable surface can be written as follows:

(1) Calculate the "forces" at each node of the mesh
(2) Move each node according to the equation of motion for these forces
(3) If the sum of the distances between each node and the model is smaller than the threshold, the algorithm terminates
(4) Otherwise, go back to Step (1); Steps (1) and (2) are repeated until the sum of the distances between each node and the model falls below the threshold.
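A 2-dimensional toy version of this loop illustrates the idea: a circle of nodes (the 2-D analogue of the initial sphere) is deformed onto a square of model points by a closest-point pull and a neighbor-averaging smoothness force. The coefficients and the simple explicit update are illustrative assumptions, not the thesis's exact equation of motion.

```python
import numpy as np

# Nodes of the initial mesh: a circle of radius 3 (the 2-D "sphere").
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
mesh = 3.0 * np.column_stack([np.cos(t), np.sin(t)])

# The "model": points sampled on the boundary of a square of half-width 1.
s = np.linspace(-1.0, 1.0, 20)
model = np.concatenate([
    np.column_stack([s, np.full_like(s, 1.0)]),
    np.column_stack([s, np.full_like(s, -1.0)]),
    np.column_stack([np.full_like(s, 1.0), s]),
    np.column_stack([np.full_like(s, -1.0), s]),
])

def mean_dist(nodes):
    """Mean distance from each node to its closest model point."""
    d = np.linalg.norm(nodes[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1).mean()

alpha, beta = 0.5, 0.3              # illustrative force coefficients
start = mean_dist(mesh)
for _ in range(200):
    d = np.linalg.norm(mesh[:, None, :] - model[None, :, :], axis=2)
    closest = model[d.argmin(axis=1)]
    f_ext = closest - mesh          # external force: pull toward the model
    f_int = (np.roll(mesh, 1, axis=0) + np.roll(mesh, -1, axis=0)) / 2.0 - mesh
    mesh = mesh + alpha * f_ext + beta * f_int   # explicit update step
final = mean_dist(mesh)
```

After a few hundred iterations the mean node-to-model distance drops far below its initial value while the ring structure of the mesh (each node keeping its two fixed neighbors) is preserved.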
Figure 4.11: Definition of local regularity
The equation of motion at the $i$th node $P_i$ of the mesh can be written as Equation (4.2).

$$\frac{d^2 P_i}{dt^2} + k \frac{d P_i}{dt} = \alpha F_{Ext} + \beta F_{Int} \qquad (4.2)$$
Here, $F_{Ext}$ is the "external force," defined by the distance between the node and the model; this force draws the node closer to the model and deforms the shape of the mesh. $F_{Int}$ is the "internal force," defined by the internal structure of the mesh itself; this force makes the mesh keep the structure of the semi-regular mesh as well as local regularity. $\alpha$ and $\beta$ are coefficients for adjusting the influences of the two forces. If $\alpha$ is too large, the shape of the mesh becomes more similar to the model, but the mesh cannot keep the structure of a semi-regular mesh and local regularity. On the other hand, if $\beta$ is too large, the mesh completely retains the structure and the constraint, but its shape remains a sphere. There is thus a trade-off between the accuracy of the shape approximation and the regularity of the mesh.
In the following sections, we concretely describe the external and internal forces.

(1) External Force (Data Force)

The external force is defined by the relative positions of the node and the model, and this force deforms the mesh to the shape of the model. This force regards the model as a simple set of 3-dimensional points and does not use its connectivity. Consider the $i$th node $P_i$ of the mesh. If the "closest" point $M_{cl(i)}$ of $P_i$ in the model is given, the external force $F_{Ext}$ at $P_i$ is defined as Equation (4.3), where $N_i$ is the normal vector of the neighbor triangle of the node $P_i$.

$$F_{Ext} = G\!\left(\frac{\|\overrightarrow{P_i M_{cl(i)}}\|}{D}\right)\left(\overrightarrow{P_i M_{cl(i)}} \cdot N_i\right) N_i \qquad (4.3)$$
Here, the term “closest” means that the Euclidean distance between the two points is closest. )(xG is 1 when the parameter x is 1 or less, and decrease rapidly when x is over 1. D is the threshold for judging the correspondence between the node and the closest point in the model. If the distance between the two points is larger thanD , the correspondence may be incorrect; this value has to be adjusted according to the scale. The suitable choice of the )(xG and D reduces the situation that the deforming falls into local minima. In this research, we used the function shown in Equation (4.4) as )(xG .
    G(x) = 1       (0 ≤ x ≤ 1)
    G(x) = 1/x²    (1 < x)        (4.4)
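The weighting G(x) and the resulting external force can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical names (`G`, `external_force`), not the thesis implementation; 3-vectors are plain tuples.

```python
import math

def G(x):
    """Weighting function of Equation (4.4): 1 on [0, 1], decays as 1/x^2 beyond."""
    return 1.0 if x <= 1.0 else 1.0 / (x * x)

def external_force(P_i, M_cl, N_i, D):
    """External (data) force of Equation (4.3), projected onto the normal N_i.

    P_i: mesh node, M_cl: its closest model point, N_i: unit normal of the
    node's neighbor triangle, D: correspondence threshold.
    """
    d = tuple(m - p for m, p in zip(M_cl, P_i))
    dist = math.sqrt(sum(c * c for c in d))
    proj = sum(c * n for c, n in zip(d, N_i))   # (M_cl - P_i) . N_i
    w = G(dist / D)                             # down-weight distant matches
    return tuple(w * proj * n for n in N_i)
```

The projection onto N_i keeps the force from dragging nodes tangentially, which would disturb local regularity.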
Also, projecting the external force onto the direction of N_i reduces its negative effect on local regularity.
Generally, if the model is very dense, such as one scanned with a high-performance laser scanner, it is expensive to search for the closest point M_cl(i) in the model. Let N be the total number of points in the model and n the number of nodes of the mesh; the naive all-points search costs O(N) per node in computational time.
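For reference, the nearest-point query that dominates this cost can be served by a k-d tree; a minimal pure-Python sketch (hypothetical helper names, 3-D point tuples assumed, not the thesis code):

```python
def build_kdtree(points, depth=0):
    """Recursively build a k-d tree over 3-D points; the split axis cycles x, y, z."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    """Return the stored point closest to `target`; expected O(log N) per query."""
    if node is None:
        return best
    if best is None or _d2(node["point"], target) < _d2(best, target):
        best = node["point"]
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    # Descend the far side only if the splitting plane is closer than the best hit.
    if diff * diff < _d2(best, target):
        best = nearest(far, target, best)
    return best

def _d2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
```

The tree is built once over the fixed model points and then queried for every mesh node at every iteration.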
But in the deformable surface method, only the mesh changes; the model is fixed during deformation. Therefore, by storing the positions of the model points in a k-d tree data structure, the computational cost of the search can be reduced: the calculation of the external force at each node costs O(log N), so the total computational cost at each stage of the iteration is O(n log N).
(2) Internal Force (Smoothness Force)
The internal force is defined by the relative positions between the node and its three neighbor nodes; the force makes the mesh keep its local regularity. In Section 4.1, we described how the circle onto which turning angles are mapped can reconstruct the original shape. We have described that the SAI is the 3-dimensional expansion of this method, and that local regularity is the expansion of the constraint that all line segments have equal lengths. In the 2-dimensional case, the constraint is required because each node has to be bound to a specific line.
In a SAI, let P be a node of the mesh, let P1, P2, P3 be the three neighbors of P, and let Q be the foot of the perpendicular from P to the triangle P1P2P3. The simplex angle mapped on P determines the distance between P and the triangle P1P2P3. That is, if the simplex angle and the position of Q are given, the 3-dimensional position of P can be calculated, because P is bound to the perpendicular of P1P2P3 that passes through Q.
Now, since Q is a point on the triangle P1P2P3, the position of Q can be represented as Q = ε1 P1 + ε2 P2 + ε3 P3, where ε1 + ε2 + ε3 = 1. ε1, ε2, ε3 are called “metric parameters.” The position of P is therefore determined by the positions of its three neighbors P1, P2, P3, the simplex angle, and the metric parameters. As a result, “local regularity” brings Q close to the neighbor triangle’s barycenter G = (P1 + P2 + P3)/3, that is, it brings the metric parameters close to ε1 = ε2 = ε3 = 1/3.
Finally, the internal force F_Int at the node P can be defined as Equation (4.5).

    F_Int = G − Q    (4.5)
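The foot of the perpendicular Q, the barycenter, and the internal force of Equation (4.5) can be sketched as follows (an illustrative Python sketch with hypothetical helper names, not the thesis code):

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def internal_force(P, P1, P2, P3):
    """Internal force of Equation (4.5): F_Int = G - Q, where Q is the foot of
    the perpendicular from P onto the plane of triangle P1P2P3 and G is the
    triangle's barycenter; it drives the metric parameters toward 1/3 each."""
    n = cross(sub(P2, P1), sub(P3, P1))
    norm = math.sqrt(dot(n, n))
    n = tuple(c / norm for c in n)                 # unit normal of P1P2P3
    h = dot(sub(P, P1), n)                         # signed height of P over the plane
    Q = tuple(p - h * c for p, c in zip(P, n))     # foot of perpendicular
    G_bar = tuple((a + b + c) / 3.0 for a, b, c in zip(P1, P2, P3))
    return sub(G_bar, Q)
```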
4.3.4 From Shape to Attribute: Forward Mapping
In this section, we concretely describe the method to calculate the equations of motion. From the discussions in the previous sections, the equation of motion of the i-th node P_i of the mesh can be written as Equation (4.6).

    d²P_i/dt² + k dP_i/dt = α F_Ext + β F_Int    (4.6)
Since this equation is a continuous differential equation, it is difficult to solve analytically. Therefore, we solved it numerically using the Euler method, one of the discretized numerical methods. The Euler method solves the equation by iteratively calculating Equation (4.7), where P_i(t), F_Ext,i(t), and F_Int,i(t) are the i-th node, its external force, and its internal force at the t-th iteration, respectively.

    P_i(t) = P_i(t−1) + (1 − k)(P_i(t−1) − P_i(t−2)) + α F_Ext,i(t) + β F_Int,i(t)    (4.7)
By iterating this calculation until the error function E becomes smaller than a threshold, this method can generate a mesh fitted to the model. E is the sum of the distances between each node P_i and its closest point M_cl(i); that is, it can be written as Equation (4.8).

    E = Σ_i ‖P_i − M_cl(i)‖    (4.8)
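The forward-mapping loop of Equations (4.7) and (4.8) can be sketched as below; `force_and_error` is a hypothetical stand-in that returns the per-node total forces (with α·F_Ext + β·F_Int already combined) and the error E.

```python
def deform(nodes, force_and_error, k=0.3, max_iter=1000, eps=1e-4):
    """Iterate the discretized equation of motion (4.7) until the error
    function E of Equation (4.8) drops below `eps` (an illustrative sketch)."""
    prev = [tuple(p) for p in nodes]   # P(t-2)
    cur = [tuple(p) for p in nodes]    # P(t-1)
    for _ in range(max_iter):
        F, E = force_and_error(cur)
        if E < eps:
            break
        # P(t) = P(t-1) + (1 - k) * (P(t-1) - P(t-2)) + F(t)
        nxt = [tuple(c + (1.0 - k) * (c - q) + f
                     for c, q, f in zip(cn, pn, fn))
               for cn, pn, fn in zip(cur, prev, F)]
        prev, cur = cur, nxt
    return cur
```

With a toy force pulling a single node toward the origin, the loop converges in a few dozen iterations because the damping term (1 − k) keeps the update stable.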
4.3.5 From Attribute to Shape: Inverse Mapping
In this section, we describe how we reconstructed the original shape from a SAI image. In forward mapping, two forces affect each node of the mesh: the external force and the internal force. The internal force is the same as in forward mapping; concretely, it can be written as F_Int = G − Q.
But in the inverse mapping, the external force keeps the distance between the node P and its neighbor triangle P1P2P3 at the distance determined by the simplex angle. As we described in Section 4.3.1, the simplex angle determines the distance between the node P and the triangle P1P2P3.
Here, the distance t can be calculated by solving Equation (4.9), which is a transformation of the equation of the simplex angle, where r is the radius of the circumcircle of the triangle P1P2P3, and l is the distance between the center of the circumscribed sphere of the tetrahedron PP1P2P3 and the foot of the perpendicular from P to the neighbor triangle P1P2P3.

    φ = arctan(t / (r + l)) + arctan(t / (r − l))    (4.9)
Therefore, the distance t can be written as Equation (4.10), where φ is the simplex angle at the node P.

    t = (r² − l²) tan φ / ( r + √(r² + (r² − l²) tan² φ) )    (0 < |φ| < π/2)
    t = (r² − l²) tan φ / ( r − √(r² + (r² − l²) tan² φ) )    (π/2 < |φ| < π)
    t = ±√(r² − l²)                                           (φ = ±π/2)
    t = 0                                                     (φ = 0)        (4.10)
As a result, the external force F_Ext in the inverse mapping is defined using t as F_Ext = (Q + tN) − P, where N is the unit normal of the neighbor triangle; this force moves P toward the position at distance t along the normal from the foot Q.
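The case analysis of Equation (4.10) translates directly into code. A sketch (hypothetical function name) that recovers the signed height t from the simplex angle:

```python
import math

def height_from_simplex_angle(phi, r, l):
    """Distance t between node P and its neighbor triangle, Equation (4.10).

    phi: simplex angle at P; r: circumcircle radius of the neighbor triangle;
    l: offset appearing in Equation (4.9). Illustrative sketch only.
    """
    if phi == 0.0:
        return 0.0
    if abs(abs(phi) - math.pi / 2) < 1e-12:
        return math.copysign(math.sqrt(r * r - l * l), phi)
    s = (r * r - l * l) * math.tan(phi) ** 2
    root = math.sqrt(r * r + s)
    # The sign in front of the square root flips at |phi| = pi/2.
    denom = r + root if abs(phi) < math.pi / 2 else r - root
    return (r * r - l * l) * math.tan(phi) / denom
```

As a consistency check, feeding the angle produced by Equation (4.9) for a known t returns that same t.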
4.4 Matching of two SAI images
Given two SAI images, that is, two spheres onto which the geometric attributes of the two original shapes are mapped, we can compute the correspondence between them based on the following two important properties of the semi-regular
mesh: (1) each mesh node has exactly three neighbors, and (2) the three neighbors are fixed during deformation. This means that the correspondence between two SAI images is determined once three pairs of nodes have been matched. In Fig. 4.12, P is a node of one mesh and P′ is a node of the other mesh. If P corresponds to P′, the two neighbors P1, P2 of P are put in correspondence with two of the three neighbors P1′, P2′, P3′ of P′, respectively.
Figure 4.12: Local correspondence of the two semi-regular meshes
The figure shows only three valid neighbor matches, since each node has exactly three neighbors and the connectivity among them is always preserved. Moreover, the number of such correspondences is 3n, where n is the number of nodes of the semi-regular mesh. We seek the one of the 3n correspondences that minimizes the sum of the differences between the attributes on the two SAI images.
4.5 Shape interpolation by SAI images
We have now seen how to align two shapes by their SAI images and how to restore a shape from a SAI image. In this section, we describe how to make an intermediate shape between two objects. It is unclear how to interpolate two shapes unless we know how to compare them; this comparison is difficult because of the unknown scale factor and rigid transformation. We have shown that it is possible to quantitatively measure the distance between two shapes using the spherical representation. Thus, we can
also interpolate these two shapes, obtaining a new simplex angle at each node from the spherical representations of the two original shapes and their correspondence. An advantage of this approach is that it shows quantitatively how the morph differs from the originals. For example, at each node of the neutral mesh C, the simplex angle φ_i^C can be computed by a linear interpolation of its counterparts in the original shapes A and B, as shown in Equation (4.11), where φ_i^A, φ_i^B are the simplex angles at the nodes A_i, B_i of the meshes A, B.

    φ_i^C = (1 − t) φ_i^A + t φ_i^B    (4.11)
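Equation (4.11) amounts to a per-node linear blend; as a sketch (hypothetical name, simplex angles given as parallel lists over matched nodes):

```python
def interpolate_simplex_angles(phi_A, phi_B, t):
    """Equation (4.11): node-wise linear blend of the simplex angles of two
    matched SAI images; t = 0 reproduces shape A, t = 1 reproduces shape B."""
    return [(1.0 - t) * a + t * b for a, b in zip(phi_A, phi_B)]
```

Running the inverse mapping on the blended angles then yields the intermediate shape.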
4.6 Problems of the conventional SAI method
In this section, we point out the problems of the conventional SAI method.
4.6.1 Asymmetry of the lines in the mesh
With the SAI method, the SAI image, i.e., the attribute-mapped sphere, can reconstruct the original shape, because the position of each mesh node can be calculated from the relative position between the node and its neighbors and the simplex angle mapped on the node, provided the mesh keeps local regularity. However, while local regularity constrains the metric parameters, it does not affect the lengths of the lines in the mesh. As a result, depending on the shape of a model, the lengths of the lines in the deformed mesh may differ greatly. This phenomenon makes the inverse mapping and the shape comparison unstable. That is, if the line lengths differ, the SAI image and the shape of a semi-regular mesh cannot have a one-to-one correspondence. Concretely, the simultaneous equations of the inverse mapping may have multiple solutions, and, depending on the position of the initial mesh or the rotation of the model, the same models may generate different SAI images.
4.6.2 Computational Costs
In the deformable surface method, since the mesh is deformed and the position of each mesh node changes, the two forces, i.e., the external and internal forces, are not constant at each stage of deformation. Therefore, the two forces have to
be recalculated at each stage of deformation. By definition, if the mesh includes n nodes, the calculation of the internal forces at all nodes costs O(n) in computational time, even if the model includes a large number of points. Unfortunately, the computational cost of the external forces depends on the number of points in the model. In Section 4.3.3, we described that, by using the k-d tree data structure, this cost can be reduced to O(n log N), where N is the number of points in the model. But the computational cost is still expensive. The accuracy of shape approximation by a deformable surface depends on the density of the nodes of the mesh: if the mesh includes more nodes, the shape of the deformed mesh is more similar to the model. Therefore, to generate precise SAI images, this computational cost should be reduced.
4.7 Proposed Methods
In this section, we propose two new methods to solve the problems of the conventional SAI method described in the previous section.
4.7.1 Spring Force
In the conventional SAI method, the “force” F at the mesh node P can be written as F = αF_Ext + βF_Int. Here, F_Ext is the external force that affects the similarity between the shape of the deformed mesh and the shape of the model, and F_Int is the internal force that affects the local regularity and the semi-regular structure of the mesh itself. To solve the problem that the lengths of the lines in the mesh differ, we introduce a new force that promotes equality of the line lengths in the mesh; we call this force the “Spring Force.”
The force acts like the force of an actual spring: its magnitude is proportional to the displacement from the “natural length,” and its direction is along the spring itself. We consider the lines in the mesh to be springs, and make their lengths equal by this force. Here, we use the average length of all lines in the mesh as the natural length of the imaginary spring.
Let P be a mesh node and let P1, P2, P3 be the three neighbors of P; the spring force F_Spring at P is defined as Equation (4.12), where k is the spring constant and L is the average length of all lines in the mesh.

    F_Spring = k(‖P1 − P‖ − L)(P1 − P) + k(‖P2 − P‖ − L)(P2 − P) + k(‖P3 − P‖ − L)(P3 − P)    (4.12)
As a result, the total force F at the node P can be written as F = αF_Ext + βF_Int + γF_Spring.
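The spring force of Equation (4.12) can be sketched as follows (illustrative Python with hypothetical names, not the thesis code):

```python
import math

def spring_force(P, neighbors, k, L):
    """Spring force of Equation (4.12): every line incident to P acts as a
    spring with natural length L (the mesh-wide average line length)."""
    F = [0.0, 0.0, 0.0]
    for Pi in neighbors:
        d = [pi - p for pi, p in zip(Pi, P)]              # spring direction
        stretch = math.sqrt(sum(c * c for c in d)) - L    # displacement from L
        for j in range(3):
            F[j] += k * stretch * d[j]
    return tuple(F)
```

A line at its natural length contributes nothing; a stretched line pulls P toward the neighbor and a compressed one pushes it away.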
4.7.2 Fast Search for Closest Point
As described in the previous sections, in the deformable surface method the total computational cost of deformation depends mainly on the time spent searching for the closest point of each mesh node in the model. By using the k-d tree data structure, this cost can be reduced to O(n log N), but the search is still expensive. At each stage of deformation, the equation of motion is written as Equation (4.13).
    d²P_i/dt² + k dP_i/dt = α F_Ext + β F_Int    (4.13)
Generally, to make the deformation accurate, the parameters α, β are quite small; that is, each node moves only a very short distance at each stage of deformation. Therefore, the closest point of a node also mostly moves a very short distance.
Now, because the model includes the positions of the points and their connectivity, it is possible to search the model beforehand for the “near” points of each point and record them. Here, the term “near” means that the points are connected directly or through one directly connected point. In the conventional method, even if a k-d tree data structure is used, all points in the model are searched to find the closest point of the node. However, the locality of the closest point makes it possible to search only the “near” points of the closest point found at the previous stage of deformation. This search does not guarantee that the found point is the truly closest point, but it is a sufficient approximation. We mainly used this search, and used the full search once every 100 deformations.
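The precomputation of “near” sets and the tracking search can be sketched as follows, assuming the model’s connectivity is given as an adjacency dict (hypothetical names, not the thesis implementation):

```python
def precompute_near(adjacency):
    """For each model point, record its 'near' set: the point itself, its
    direct neighbors, and points reachable through one direct neighbor."""
    near = {}
    for p, direct in adjacency.items():
        s = {p}
        s.update(direct)
        for q in direct:
            s.update(adjacency[q])
        near[p] = s
    return near

def track_closest(points, near, prev_closest, node):
    """Approximate update: search only the 'near' set of the closest point
    found at the previous deformation step (a full search is still run
    periodically, e.g. once every 100 steps)."""
    return min(near[prev_closest],
               key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], node)))
```

Because each “near” set has bounded size, the per-node cost of the tracking search is O(1) regardless of the number of model points.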
4.8 Experiment
In this section, we analyze the models of fowls' skulls by the methods described previously in this chapter. The objects of the analysis are the 3-dimensional geometric models of fowls' skulls. The fowls comprise 5 breeds and 24 individuals: 5 Gifu-Jidoris, 5 Syamos, 5 Satsuma-Doris, 4 Ko-Syamos, and 5 White Leghorns. Except for the European White Leghorns, which are the most popular egg breed, they are all native Japanese fowls.
4.8.1 Generation of SAI images from the models of the fowls' skulls
In this section, we describe how we generated the SAI images from the 3-dimensional geometric models of the fowls' skulls, and evaluate the new SAI method described in Section 4.7. First, we visualize the process of generating the SAI image from the model. Figures 4.13 to 4.16 show one example of the visualization. In this example, we used a Syamo's skull, as shown in Fig. 4.13. Figure 4.14 shows the initial mesh of the deformable surface. Figure 4.15 shows the deformed surfaces from two different views. By calculating the simplex angle at each mesh node and mapping it onto the sphere, we obtained the SAI image shown in Fig. 4.16.
Figure 4.13: The 3-dimensional geometric model of a Syamo
Next, we evaluate the new method proposed in Section 4.7. The purpose of the proposed method is to equalize the lengths of the lines and to reduce the computational time. We therefore used the variance of the line lengths, the "external error," and the run time to evaluate its validity. Here, "external error" is the sum of the distances between each node of the mesh and its closest point on the model. Figures 4.17, 4.18, 4.19, and 4.20 show the external error (left in each figure) and the variance (right in each figure) for the following four methods: the conventional method, a method using spring forces, a method using fast search, and a method using both spring forces and fast search. The x-axis of each graph is the run time [sec].
These graphs show that the "spring force" reduces the variance of the lengths of the lines of the mesh. And, although the fast search method takes time for an initial calculation, the calculation is performed at high speed after the deformation starts. This initial calculation corresponds to the preparation for searching the "near" points of all nodes.
Figure 4.17: The conventional method
Figure 4.18: The method using spring force
Figure 4.19: The method using fast search
Figure 4.20: The method using spring forces and fast search
4.8.2 Comparison of SAI images
In this section, we describe how we analyzed the SAI images generated from the models of the fowls' skulls. First, using the method described in Section 4.4, we matched the SAI images generated from the models of the fowls' skulls. Next, considering the matched SAI images as vector data, we analyzed their simplex angles using the method described in Chapter 2. The dendrogram of the relationships of their 3-dimensional shapes is shown in Fig. 4.21. Next, we generated synthetic models by reconstructing the interpolation of two SAI images generated from the same breed (from two Syamos in Fig. 4.22 and from two White Leghorns in Fig. 4.23). Because they are generated from the SAI images of the same breed, the shapes of these imaginary models are typical of the breed.
Figure 4.21: The dendrogram of the relationships of the breeds
Figure 4.22: The shape of a synthetic Syamo
[Dendrogram of Fig. 4.21, leaves in order: kosyamo #1, kosyamo #3, kosyamo #2, gifu #3, kosyamo #4, satsuma #1, satsuma #3, syamo #1, syamo #2, syamo #3, syamo #4, syamo #5, satsuma #2, satsuma #5, satsuma #4, wl #4, wl #1, wl #2, wl #3, gifu #1, gifu #2, gifu #5, gifu #4, wl #5]
Chapter 5
Conclusion and Future Work
In this thesis, we have described how we generated the 3-dimensional geometric models of the fowls' skulls and analyzed them using 2-dimensional and 3-dimensional analysis. We have also described the concept and algorithm of the spherical attribute image (SAI), and proposed new methods for generating SAIs.
In this research, the 2-dimensional analysis quantitatively shows that the specific region named the "stop" characterizes the shape of the fowls' skulls. The "stop" is regarded as an important region for classifying the skull shape not only of fowls but also of many animal species, and it is significant that image analysis confirmed this fact.
In the height analysis, the results showed that the dispersion within each breed is so large that it is difficult to estimate the interspecies relationships. In fact, there is no difference between this method and the conventional osteometrical methods that precisely measure specific parts of the specimens: the experiments using this method obtained only a 1-dimensional difference, and the result suggests that the difference mainly depends on the size of the individual.
But in the curvature analysis, the result of the cluster analysis showed that same-breed individuals, with the exception of the Satsuma-Doris, are correctly classified into one cluster. This fact shows the validity of shape comparison by this method. The Satsuma-Doris are presumably not classified into one cluster because they were continuously improved for meat and for cockfighting and are therefore genetically diverse. The fact that the Satsuma-Doris have a large within-breed variance agrees with this conjecture. In addition, the White Leghorns, which are an artificially improved breed like the Satsuma-Doris, also have a large within-breed variance.
On the other hand, in the 3-dimensional analysis, some same-breed individuals were not classified into the correct cluster, even though 3-dimensional data include richer information than 2-dimensional data do. We should determine whether this result is caused by actual differences in 3-dimensional shape or by the insufficiency of the 3-dimensional analysis; if the cause is the latter, we should improve our method to solve the problem.