
Page 1: [IEEE 2014 Third International Conference on Agro-Geoinformatics - Beijing, China (2014.8.11-2014.8.14)] 2014 The Third International Conference on Agro-Geoinformatics - High-fidelity

High-Fidelity 3D Plants Model Reconstructed based on Color Structured Light

Kai Si1, 3; Jian Zhang1, 3; Zongnan Li1, 3; Zhenyu Guo1, 3; Xiuxiu Lu1, 3; Jing Xie2* 1College of Resources and Environment, Huazhong Agricultural University, Wuhan 430070, China

2College of Science, Huazhong Agricultural University, Wuhan 430070, China 3Key Laboratory of Arable Land Conservation (Middle and Lower Reaches of Yangtse River), Ministry of Agriculture,

Wuhan, 430070, China

Abstract—To construct high-precision, high-fidelity 3D plant models quickly and economically, this study presents a new approach that combines color structured light with the silhouette-based method to repair the model's depth information. First, color structured light, coded with a spatial encoding method, is projected onto the surface of the plant, and the plant is photographed by two non-contact cameras from different angles. Second, the centers of the stripes are extracted and the structured light is decoded based on color-space conversion. Then, feature points are obtained by matching the color stripes under the epipolar constraint. Finally, the cameras are calibrated, the coordinates of the feature points are computed, and the 3D plant model is reconstructed from the feature points. Experimental results show that the method can repair the depth information of the 3D model effectively.

Keywords: Color structured light; 3D reconstruction; High precision; High fidelity

I. INTRODUCTION

In agricultural production and research, it is very important to understand plant morphological characteristics, because they are useful for seed breeding, nutrition diagnosis, growth monitoring, and so on. At present, manual measurement is the primary method of capturing plant morphological parameters, but it is inefficient, laborious, subjective, and error-prone. As computer vision systems have become widely used in agricultural research, extracting morphological parameters from 3D plant models has become a focus of attention. Many studies on the 3D reconstruction of plants have been reported, and the approaches can be roughly classified into three categories: L-systems, silhouette-based methods, and laser scanning. With the L-systems method, the plant's morphological parameters must first be captured by manual measurement, and the 3D model is then structured from those parameters; a model reconstructed this way not only has low accuracy but also wastes time and energy [1, 2]. The silhouette-based algorithm reconstructs the 3D plant model from image sequences photographed from different angles, but the depth information cannot be recovered, so the model has low accuracy and low fidelity for plants with large leaves [3, 4]. Laser 3D scanning reconstructs the plant model by capturing feature points, but the device is expensive.

Moreover, the model reconstructed by this method lacks texture [5, 6].

To construct high-precision, high-fidelity 3D plant models quickly and economically, this study proposes a method that combines color structured light with the silhouette-based method to repair the depth information that the silhouette-based method alone cannot reconstruct. This method improves the precision of the 3D model to a certain extent.

II. 3D MODEL RECONSTRUCTION

A. Camera Calibration

Camera calibration is one of the most important steps in 3D reconstruction. Its purpose is to calculate the intrinsic and extrinsic parameters of the camera from known corresponding points in the world coordinate system and the image coordinate system; this defines how a point is transformed from three-dimensional space to the image coordinate system. Many calibration approaches exist at present, such as the method based on the perspective transformation matrix [7], the method based on the radial alignment constraint [8], the flexible calibration technique proposed by Zhengyou Zhang [9], and the method based on data from two calibration planes [10].

FIGURE 1. CAMERA IMAGING MODEL

As Figure 1 shows, o_c is the optical center of the camera, o_c-x_c y_c z_c is the camera coordinate system, and o-xy is the camera imaging plane coordinate system;

*Corresponding author: Tel.:+8615926460100; E-mail address: [email protected] (Jing Xie)


o_w-x_w y_w z_w is the world coordinate system, and (u, v) is the image coordinate system. M is a space point; its coordinates are X_w = (x_w, y_w, z_w) in the world coordinate system and X_c = (x_c, y_c, z_c) in the camera coordinate system. N is the projection of M onto the camera imaging plane; its coordinates are N = (x, y) in the imaging plane coordinate system and U = (u, v) in the image coordinate system. The transformation of M from the world coordinate system to the image coordinate system is given by equation (1).

s U = K (R, T) X_w    (1)

where U = (u, v, 1)^T and X_w = (x_w, y_w, z_w, 1)^T are homogeneous coordinates,

K = | f_x  γ    u_0 |
    | 0    f_y  v_0 |
    | 0    0    1   |

is the matrix of internal parameters of the camera, f_x and f_y are the scale factors on the u-axis and v-axis, (u_0, v_0) is the principal point, and γ is the skew coefficient;

R = | r_11  r_12  r_13 |
    | r_21  r_22  r_23 |
    | r_31  r_32  r_33 |

is the rotation matrix of the camera and

T = | t_1 |
    | t_2 |
    | t_3 |

is the translation vector, so that (R, T) are the extrinsic parameters of the camera; s is a nonzero constant scale factor. Let P = K (R, T); P is called the projection matrix.

For points on the calibration plane, where z_w = 0, equation (1) can be converted to equation (2):

s (u, v, 1)^T = K [r_1 r_2 r_3 T] (x, y, 0, 1)^T = K [r_1 r_2 T] (x, y, 1)^T = H (x, y, 1)^T    (2)

where r_i denotes the i-th column of the rotation matrix and H is a homography matrix. Writing h_i for the columns of H,

H = [h_1 h_2 h_3] = (1/s) K [r_1 r_2 T]    (3)

r_1 = s K^{-1} h_1,  r_2 = s K^{-1} h_2    (4)

Because the rotation matrix R is an orthogonal unit matrix, every image yields the following two basic constraints:

h_1^T K^{-T} K^{-1} h_2 = 0    (5)

h_1^T K^{-T} K^{-1} h_1 = h_2^T K^{-T} K^{-1} h_2    (6)

The camera has five internal parameters, so once three or more images are obtained, the internal parameters can be solved. The extrinsic parameters are then obtained from the relationship between the internal and extrinsic parameters, and finally the projection matrix of the camera is worked out. This paper uses the flexible calibration technique proposed by Zhengyou Zhang [9] to solve the camera calibration problem.
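As a sketch of how equations (1)-(6) fit together, the following Python/NumPy fragment builds a synthetic K, R, and T (all values illustrative, not from the paper's cameras), projects a world point on the z_w = 0 plane, and checks the two constraints (5) and (6) numerically:

```python
import numpy as np

# Synthetic intrinsics K (scale factors f_x, f_y; principal point (u0, v0);
# zero skew) -- illustrative values, not the paper's calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation about the z-axis by 10 degrees plus a translation.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([[0.1], [0.2], [2.0]])

# Projection matrix P = K (R, T), equation (1).
P = K @ np.hstack([R, T])

# Project a world point lying on the calibration plane z_w = 0.
Xw = np.array([0.3, 0.4, 0.0, 1.0])
sU = P @ Xw
u, v = sU[0] / sU[2], sU[1] / sU[2]          # pixel coordinates

# For z_w = 0 the mapping collapses to the homography H = K [r1 r2 T]
# of equations (2)-(3) (here with scale s = 1).
H = K @ np.hstack([R[:, :2], T])
assert np.allclose(sU, H @ np.array([0.3, 0.4, 1.0]))

# Each view contributes the two constraints (5) and (6) on K.
B = np.linalg.inv(K).T @ np.linalg.inv(K)
h1, h2 = H[:, 0], H[:, 1]
assert abs(h1 @ B @ h2) < 1e-9                        # equation (5)
assert abs(h1 @ B @ h1 - h2 @ B @ h2) < 1e-9          # equation (6)
```

With three or more views, constraints (5)-(6) stacked over all homographies determine the five intrinsic parameters, which is the core of Zhang's method [9].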

B. Code the Color Structured Light

1) Encoding Method

Various approaches to coding color structured light exist. This study uses the spatial encoding method to encode the color structured light and generates de Bruijn sequences. In spatial coding, the code word of every pixel is determined by a specific sequence and geometric shape formed by the gray level and hue of the pixel itself and its adjacent pixels, so it can carry more useful information than time-domain encoding [11]. A de Bruijn sequence

is a kind of pseudo-random sequence that is determinable in advance and reproducible. A k-ary, order-n de Bruijn sequence is composed of k elements, and every subsequence of n elements appears exactly once; that is, the de Bruijn sequence has a one-dimensional window property.

Appropriate colored stripes must be selected before coding the color structured light. To distinguish the stripes easily, we choose the three primary colors (red, green, blue) and their complementary colors (magenta, yellow, cyan) as the stripe colors, denoted by the numbers 1-6 (1 red, 2 green, 3 blue, 4 magenta, 5 yellow, 6 cyan). Using these six numbers as the basic elements and a window size of three, we generate a de Bruijn sequence with the online generator [12], obtaining the color structured light image with a 6-ary, order-3 de Bruijn sequence shown in Figure 2. In the image, every stripe is three pixels wide; the pixels at each stripe's center are the brightest, and brightness falls off with distance from the center.

FIGURE 2. IMAGE OF COLOR STRUCTURED LIGHT
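The 6-ary, order-3 sequence described above can also be generated directly; the following Python sketch uses the standard Lyndon-word concatenation algorithm (rather than the online generator [12]) and checks the one-dimensional window property:

```python
def de_bruijn(k, n):
    """Generate a k-ary, order-n de Bruijn sequence (length k**n) by
    concatenating Lyndon words (the standard recursive algorithm)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Six stripe colours, window size 3: every length-3 colour triple occurs
# exactly once, so a stripe plus its two neighbours identifies its position.
seq = de_bruijn(6, 3)
assert len(seq) == 6 ** 3  # 216 stripes

# Verify the one-dimensional window property on the cyclic sequence.
cyc = seq + seq[:2]
windows = {tuple(cyc[i:i + 3]) for i in range(len(seq))}
assert len(windows) == len(seq)
```

Mapping each element 0-5 to the six stripe colors then yields a pattern in which any window of three consecutive stripes is unique.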

2) Project the Color Structured Light

The plant is placed at the center of a circular turntable. Before projecting the coded color structured light, we first photograph the plant with one non-contact camera. Then


we project the coded color structured light onto the surface of the plant and photograph it with two non-contact cameras from different angles, as in Figure 3. In both passes the turntable is rotated through a full revolution, with a shot taken every fifteen degrees; in this way we finally obtain 72 pictures of the plant under the coded color structured light.

A. Image from the right camera    B. Image from the left camera

FIGURE 3. TWO CAMERAS PHOTOGRAPHING THE COTTON PLANT UNDER PROJECTED COLOR STRUCTURED LIGHT

C. Decode the Color Structured Light

Because of the bumpy surface of the plant, the plant's own color, mutual interference between stripes of different colors, camera noise, and other factors, the color, position, and shape of the stripes in the captured images are changed to a certain extent. To calculate the 3D coordinates of the center pixels of the color stripes, we must decode the structured light projected onto the plant's surface.

First, we extract the central pixels of the color stripes based on gradient analysis [13]: the modulated images are grayed and normalized, and an image filter removes noise. The gray values of the pixels are then arranged from left to right and top to bottom into a matrix I with u rows and v columns. The gray values along each row form a waveform, as in Figure 4. We first locate the wave troughs and crests, then compute the midpoint between each neighboring trough and crest, and finally find the maximal or minimal gray value between two neighboring midpoints; the pixel with that extreme value is a captured feature point. All the feature points together constitute the central pixels of the color stripes.

FIGURE 4. THE WAVEFORM SHAPE OF THE PIXEL GRAY VALUES
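The trough/crest search described above can be sketched as follows. The row profile here is synthetic, filtering and normalization are assumed to have been done already, and on such a clean profile the extremum between two neighboring midpoints is simply the crest or trough itself:

```python
import numpy as np

def stripe_extrema(row):
    """Locate the crest and trough positions of one image row's
    grey-level profile from sign changes of the first difference.
    Crests (brightest pixels) are the stripe-centre candidates."""
    d = np.sign(np.diff(row.astype(float)))
    # crest: slope changes from rising to falling; trough: the reverse
    crests = [i + 1 for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] < 0]
    troughs = [i + 1 for i in range(len(d) - 1) if d[i] < 0 and d[i + 1] > 0]
    return crests, troughs

# Synthetic row: stripes three pixels wide with the centre pixel brightest,
# as in the Figure 2 pattern.
row = np.array([10, 200, 10, 20, 210, 20, 10, 190, 10])
crests, troughs = stripe_extrema(row)
print(crests)   # [1, 4, 7]
```

Running this over every row of the matrix I yields the set of stripe-center feature points.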

The purpose of color separation is to recover each stripe center's color, which is independent of the light and shade of the stripes, so we transform the images from RGB to HSI format based on color transformation theory [14]. The HSI format expresses a pixel by hue H, intensity I, and saturation S. We compute H, I, and S and use fixed thresholds on H to separate the different colors.

D. Calculate 3D Coordinates and Reconstruct the 3D Model

The 3D coordinates of the feature points are calculated under the epipolar constraint [15], yielding the 3D point cloud. The feature points are then connected into triangles by the Delaunay method, and the 3D visual hull of the plant is generated from these elemental triangles; the depth information is evident in Figure 5A. Finally, the real texture of the plant is projected onto the surface of the 3D model, producing a plant 3D model with real texture, as in Figure 5B.

A. Visual hull    B. Model with real texture

FIGURE 5. THE 3D MODEL OF A COTTON LEAF
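The feature-point coordinate computation can be illustrated with standard linear (DLT) triangulation from two calibrated views. The projection matrices below are synthetic, and the paper's epipolar-constraint stripe matching [15] is assumed to have already produced the pixel correspondences:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """DLT linear triangulation: given two 3x4 projection matrices and a
    matched pixel pair (u1, u2), solve s*(u, v, 1)^T = P*X for X via the
    SVD null vector of the stacked linear system."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic calibrated cameras (values illustrative only).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])         # left view
ang = np.deg2rad(5.0)
R2 = np.array([[np.cos(ang), 0., np.sin(ang)],
               [0., 1., 0.],
               [-np.sin(ang), 0., np.cos(ang)]])
P2 = K @ np.hstack([R2, np.array([[-0.5], [0.], [0.]])])  # right view

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 3.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
assert np.allclose(X_est, X_true, atol=1e-6)
```

Applying this to every matched stripe-center pair yields the 3D point cloud that the Delaunay triangulation then connects.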

III. RESULTS ANALYSIS

Figure 6B shows the result of the 3D reconstruction of a cotton plant, and Figure 6A shows the true morphology of the plant. Satisfactory results were obtained with this method, and the details of the leaves are well reproduced.

A. The real plant    B. The reconstruction result

FIGURE 6. COMPARISON OF THE REAL PLANT AND THE RECONSTRUCTION RESULT

We measured the plant height, blade width, and leaf angle of real cotton plants and of their 3D models as parameters to verify the accuracy of the 3D plant models. To ensure the


accuracy of the measured parameter values, we marked the blades and leaf stalks, measured the distances and included angles on the real plants and the 3D models several times directly, and averaged the measurements as the parameter values. The two data series were then compared; Table I shows the analysis results.

The analysis shows that the method works well: the relative errors of plant height, blade width, and leaf angle are between 1% and 4%, indicating that 3D models rebuilt with this method have high accuracy and fidelity.

TABLE I. THE ANALYSIS RESULTS OF REAL DATA AND MODEL DATA

                        Blade width                  Leaf angle              Plant height
Leaf number        1      2      3      4       1      2      3      4
Real value       41.18  50.80  70.56  72.00   76.00  60.00  48.00  39.00      155.00
Model value      39.69  50.12  68.59  70.17   74.50  62.00  47.00  40.00      158.00
Relative error    4%     1%     3%     3%      2%     3%     2%     3%         2%
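As a quick consistency check, the relative errors in Table I can be recomputed from the real and model values, |real - model| / real, rounded to whole percent:

```python
# Values copied from Table I, in row order: blade widths 1-4,
# leaf angles 1-4, plant height.
real  = [41.18, 50.80, 70.56, 72.00, 76.00, 60.00, 48.00, 39.00, 155.00]
model = [39.69, 50.12, 68.59, 70.17, 74.50, 62.00, 47.00, 40.00, 158.00]
errors = [round(abs(r - m) / r * 100) for r, m in zip(real, model)]
print(errors)  # [4, 1, 3, 3, 2, 3, 2, 3, 2]
```

The recomputed values match the relative-error row of the table.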

IV. CONCLUSION

In this study, we combined color structured light with the silhouette-based method to repair the depth information that the silhouette-based method cannot reconstruct; the approach is economical and effective. The experimental results indicate that the method can reconstruct 3D models of high precision and fidelity that truly represent the morphology of plants, and it provides a convenient way to support breeding and plant nutrition analysis, which will promote the study of plant morphology. The method improves the precision of the 3D model to a certain extent, but the result is not good at the tip of a blade; a very thin blade is likely to show defects, and a leaf stalk is likely to crack if it is too slender. Further research is needed to improve the accuracy of the results.

ACKNOWLEDGMENTS

This study was supported by the National Natural Science Foundation of China (Grant No. 41201364), the Fundamental Research Funds for the Central Universities (Grant Nos. 2014JC008, 2011QC040), the Hubei Provincial Natural Science Foundation of China (Grant No. 2010CDB099), and the National College Students' Innovative Training Program (Grant Nos. 201410504023, 201310504002).

REFERENCES [1] Shlyakhter I, Teller S, Rozenoer M, et al. Reconstructing 3D tree models

from instrumented photographs[J]. IEEE Computer Graphics and Applications, 2001, 21(3): 53-61.

[2] Chuanyu Wang, Ming Zhao, Jianhe Yan, et al. Three-dimensional reconstruction of maize leaves based on binocular stereovision system[J]. Transactions of the CSAE, 2010, 26(4): 198 - 202. (in Chinese with English abstract)

[3] Phattaralerphong J, Sinoquet H. A method for 3D reconstruction of tree crown volume from photographs: assessment with 3D-digitized plants[J]. Tree Physiology, 2005, 25(10): 1229-1242.

[4] Eisert P, Steinbach E, Girod B. Automatic reconstruction of stationary 3-D objects from multiple uncalibrated camera views[J]. Circuits and Systems for Video Technology, IEEE Transactions on, 2000, 10(2): 261-277.

[5] Paulus S, Schumann H, Kuhlmann H, et al. High-precision laser scanning system for capturing 3D plant architecture and analysing growth of cereal plants[J]. Biosystems Engineering, 2014, 121: 1-11.

[6] Wei Xueli, Xiao Boxiang, Guo Xinyu, et al. Analysis of applications of 3D laser scan technology in plant scanning[J]. Chinese Agricultural Science Bulletin, 2010, 26(20): 373-377. (in Chinese with English abstract)

[7] Luh J Y S, Klaasen J A. A three-dimensional vision by off-shelf system with multi-cameras[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 1985 (1): 35-45.

[8] Tsai R Y. An efficient and accurate camera calibration technique for 3D machine vision[C]//Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1986.

[9] Zhang Z. A flexible new technique for camera calibration[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2000, 22(11): 1330-1334.

[10] Martins H A, Birk J R, Kelley R B. Camera models based on data from two calibration planes[J]. Computer Graphics and Image Processing, 1981, 17(2): 173-180.

[11] Zhengliang Wei. Research on the technique of dynamic 3D measurement by projecting structured light base on color code and model reconstruction[D]. Tsinghua University, 2009. (in Chinese with English abstract)

[12] Ruskey F. The Combinatorial Object Server Generation Index Page (COS). http://theory.cs.uvic.ca/cos.html, 2000.

[13] Jingtao Fan, Cheng Han, Chao Zhang, et al. Study of a new decoding technology for De Bruijn structured light[J]. Acta Electronica sinica, 2012, 40(3): 483-488. (in Chinese with English abstract)

[14] Weiyi Liu, Zhaoqi Wang, Guoguang Mu, et al. Color Distinction and its application in color-coded grating profilometry[J]. Acta Optica Sinica, 2001,21(3): 454-458. (in Chinese with English abstract)

[15] Haijia Ye, Gang Chen, Yuan Xing. Stereo matching in 3D measurement system using double CCD structured light[J]. Optics and Precision Engineering, 2004, 12(1): 71-75. (in Chinese with English abstract)