
A New Approach for Measuring 3D Digitalized Rape Leaf Parameters based on Images

FANG Yihang, LIN Chengda*, ZHAI Ruifang, TANG Yao, WANG Xingyu
College of Resource and Environment

Huazhong Agricultural University Wuhan, China

[email protected]

Abstract—At present, crop leaf area is mainly measured in a two-dimensional way, which may cause a certain degree of damage to the plant. It is therefore of great significance to propose a non-contact method for leaf area measurement. In this paper, pictures of rape are obtained from different view angles with a mobile camera, and the three-dimensional digitization of rape is realized with SFM (Structure From Motion) and MVS (Multiple View Stereo) technologies. We conduct a case study and establish a three-dimensional model of a rape leaf by applying NURBS surface fitting. Finally, the surface area of the rape leaf is calculated automatically and compared with the area obtained by a traditional digital image measurement method, thereby realizing non-destructive detection of rape growth for precision agriculture. The experimental results support the proposed image-based method for measuring 3D digitalized rape leaves.

Keywords—3D leaf model; structure from motion (SFM); multiple view stereo (MVS); leaf area measurement; precision agriculture

I. INTRODUCTION

The leaf is a significant organ for transpiration, photosynthesis and the synthesis of organic matter. The developmental state and size of leaves have a large impact on plant growth and development, stress resistance and yield, and can also guide cultivation density, fertilization level and studies of plant physiology. Rape is one of the most important crops in China [1]. It is also a significant raw material for fodder, chemical engineering and energy, and plays a major strategic and economic role in the national economy. Consequently, it is urgent and necessary to propose an approach for measuring the rape leaf directly under natural growing conditions. Such an approach is of great significance for digital agriculture and precision agriculture, as it can provide real-time and accurate basic information about the key growth stages of plants and thereby support decisions in agricultural production.

Traditionally, the leaf area was mainly measured in a two-dimensional way, for instance by the planimeter method, transparent-squares method, weighing method, empirical formula method, leaf area meter, or digital image processing. However, these methods are all destructive: they may damage the leaves, interrupt continuous growth, and further impact the continuity of crop experiments. They are not only time- and labor-consuming, but their accuracy and precision also vary with measurement conditions and the subjectivity of the operator. At present, newer measurement methods mainly acquire three-dimensional point cloud data of plants in a non-contact manner, and a three-dimensional model is then established from the point cloud to measure the morphological parameters of the plants. For instance, three-dimensional point cloud data can be acquired with laser scanning, but this has inherent defects, such as the lack of leaf color and texture information, the expense of the laser scanning device, and sensitivity to the experimental environment [2,3]. Machine-vision-based shape detection and modeling technology is an alternative approach for obtaining three-dimensional models of specific plants [4,5]. Researchers have already proposed several methods of plant reconstruction, including surface-section-based reconstruction, outline-based reconstruction, and three-dimensional point cloud reconstruction based on stereo vision [6,7]. Three-dimensional point cloud reconstruction based on stereo vision can overcome the problems of the previous methods: it yields a three-dimensional model of the plant, together with the color and texture information of the leaves, cheaply, conveniently and non-destructively.

Fig. 1 shows the workflow of three-dimensional modeling of the rape leaf based on stereo vision and non-destructive measurement technology. First, rape photos are obtained from different view angles, and SFM technology [8] is applied to generate a sparse three-dimensional point cloud of the rape and the camera parameters from a series of uncalibrated photos. On that basis, region growing is conducted by MVS [9] on the data generated by SFM to produce a dense three-dimensional point cloud, from which the leaf point cloud is then obtained by manual segmentation. Finally, NURBS surface fitting is applied to the dense three-dimensional point cloud of each leaf, and the area of the surface model is measured. The result is compared with that obtained from a traditional digital image measurement method.

*Corresponding author: LIN Chengda, [email protected]


[Figure 1 depicts the workflow: Images → Keypoint detection (SIFT) → Keypoint matching (SIFT) → SFM (Bundler) → sparse 3D point cloud → MVS (PMVS) → dense 3D point cloud → Filtering and segmentation → Leaf surface fitting (NURBS).]

Fig. 1. The workflow chart

II. ILLUSTRATION OF METHOD

A. Detection and Matching of Key Point Features

SIFT (Scale-Invariant Feature Transform) [10] is an algorithm for detecting local features. It is invariant to rotation, scaling and brightness variation, and has a certain stability under changes of viewpoint, affine transformation and noise. Therefore, the SIFT algorithm is adopted to detect and match key point features. It mainly consists of four steps: 1. Detection of scale-space extrema. In scale space, potential interest points that are invariant to scale and rotation are detected with a difference-of-Gaussian function. 2. Localization of key points. At each candidate interest point, the location and scale of the key point are determined. 3. Orientation assignment. Each key point is assigned an orientation based on the local gradient directions of the image. 4. Key point description. The local image gradients are measured in the neighborhood of each key point.
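As a hedged illustration of these four steps, the sketch below detects and matches SIFT keypoints between two views using OpenCV's SIFT implementation rather than Lowe's original demo program used in the experiments; the image file names and ratio-test threshold are placeholders.

```python
# Minimal sketch: SIFT keypoint detection and matching between two views
# using OpenCV (a stand-in for Lowe's original SIFT demo used in the paper).
# Image file names are placeholders.
import cv2

img1 = cv2.imread("rape_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("rape_view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                       # steps 1-4: detect, localize, orient, describe
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only pairs that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} reliable matches")
```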

B. Three-dimensional Digitization of Rape with SFM and MVS Technology

Calculating the three-dimensional information of a scene from an image sequence or video is the inverse of the imaging process. Generally, the calculation proceeds in two steps: SFM (Structure From Motion) and MVS (Multi-View Stereo). SFM, also known as calibration, computes a sparse three-dimensional point cloud from the image sequence together with the camera parameters of each picture (internal parameters, orientation and location). After the camera parameters of each image are obtained, the dense three-dimensional point cloud can be calculated by MVS through dense matching of pixel points between images.

Fig. 2. SFM Schematic Diagram

• SFM refers to the process of seeking matching points in image sequences taken from different angles and then calculating the corresponding three-dimensional point locations and the camera parameters of each image from those matches. The theoretical foundation of SFM (Fig. 2) is the geometric principle of perspective projection, namely establishing the relation between two dimensions and three dimensions through a perspective projection. Suppose there is a point X in three-dimensional space whose corresponding two-dimensional pixel on image I is x; then there is a perspective projection matrix P = K[R|t] (K is the internal parameter matrix, R is the orientation of the camera, and t is the camera position) such that the projection equation x = PX holds. The principle of SFM is to extract matching two-dimensional feature points x1, x2, ... (matching points in different images) from pictures taken at different angles, and to recover from these correspondences and the projection equation the three-dimensional coordinates of the matched points as well as the internal parameters, orientation and position of each image. Fig. 4(a) shows the recovered sparse three-dimensional point cloud and the camera parameters of each image (Bundler output), both computed from the sparse two-dimensional matching points. Since this three-dimensional point cloud is too sparse, only simple object models can be generated from it (for instance, regular buildings, furniture, etc.). For complicated objects such as plants, a dense three-dimensional point cloud must be reconstructed, which requires MVS technology.
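To make the projection equation concrete, the following sketch evaluates x = PX with P = K[R|t] for an illustrative camera; the intrinsic matrix, pose and 3D point are assumed values, not calibration results from this experiment.

```python
# Sketch of the perspective projection x = P X with P = K [R | t].
# K, R, t and the 3D point are illustrative values, not calibration results.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # focal lengths and principal point (pixels)
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera orientation (identity: looking along +Z)
t = np.array([[0.0], [0.0], [2.0]])    # camera translation

P = K @ np.hstack([R, t])              # 3x4 projection matrix

X = np.array([0.1, -0.05, 1.0, 1.0])   # homogeneous 3D point
x_h = P @ X                            # homogeneous image coordinates
x = x_h[:2] / x_h[2]                   # pixel coordinates (u, v)
print(x)

# SFM inverts this relation: given matched pixels x1, x2, ... across views,
# it recovers each camera's K, R, t and the 3D points X by minimizing reprojection error.
```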

• MVS takes the calibrated image sequence as input and generates the corresponding point cloud or mesh model. MVS methods mainly fall into the following types: (1) voxel-based methods, which divide the space around the object into small cubes (voxels) and decide whether each voxel is kept according to the photo-consistency of the images; (2) deformable mesh methods, which obtain the visual hull of the object from its outlines in the images as an initial mesh model and then deform the mesh toward the actual object surface by exploiting the photo-consistency of the images; (3) depth-map methods, which compute a depth map for each picture from pixel matching among different images and then merge the depth maps into a unified three-dimensional point cloud; (4) patch-based methods, which express the surface of the object as a collection of small oriented surface patches and expand a sparse set of matched seed points into a dense three-dimensional point cloud. Fig. 4(b) shows the dense three-dimensional point cloud generated from the images with Furukawa and Ponce's patch-based algorithm (PMVS). Since each three-dimensional point corresponds to two or more pixels on different images, it also carries color information.
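As a hedged illustration of the depth-map variant (type 3), the sketch below back-projects a per-view depth map into a common 3D frame so that it can be merged with points from other views; the depth values and camera parameters are synthetic placeholders rather than outputs of an actual MVS run.

```python
# Sketch of MVS type (3): back-project a per-view depth map into 3D points
# that can be merged with points from other views. Depth values and camera
# parameters are synthetic placeholders, not outputs of a real MVS run.
import numpy as np

def depth_to_points(depth, K, R, t):
    """Back-project an HxW depth map to Nx3 world-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN homogeneous pixels
    rays = np.linalg.inv(K) @ pix                 # viewing rays in the camera frame
    cam_pts = rays * depth.reshape(1, -1)         # scale each ray by its depth
    world_pts = R.T @ (cam_pts - t)               # camera frame -> world frame
    return world_pts.T

K = np.array([[500.0, 0, 64], [0, 500.0, 48], [0, 0, 1]])
R, t = np.eye(3), np.zeros((3, 1))
depth = np.full((96, 128), 1.5)                   # flat synthetic depth map

points = depth_to_points(depth, K, R, t)
print(points.shape)                                # (12288, 3), ready to merge into one cloud
```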

C. NURBS Surface Fitting of Rape Leaf

Surface fitting of the three-dimensional point cloud is conducted with the NURBS (Non-Uniform Rational B-Splines) surface method [11], which provides a unified algorithm for describing free-form curves (surfaces) and primary analytic curves (surfaces), with the advantages of flexible operation, stable and fast computation, and a clear geometric interpretation. As a surface reconstruction over quadrilateral regions, a NURBS surface can interpolate the data points of a quadrilateral mesh and can also be used to construct n-sided region surfaces. NURBS is a standard mathematical representation of the geometric shape of industrial products, and its k-order curve form can be expressed as equation (1).

P(u) = \frac{\sum_{i=0}^{n} B_{i,k}(u) W_i V_i}{\sum_{i=0}^{n} B_{i,k}(u) W_i}    (1)

In equation (1), V_i is a control point, W_i is the corresponding weight factor, and changing a weight factor changes how strongly the curve is pulled toward that control point; B_{i,k}(u) is the k-order B-spline basis function, whose zero-order term B_{i,0}(u) and recursive form B_{i,k}(u) are given by equation (2):

B_{i,0}(u) = \begin{cases} 1, & u_i \le u \le u_{i+1} \\ 0, & \text{otherwise} \end{cases}, \qquad
B_{i,k}(u) = \frac{u - u_i}{u_{i+k} - u_i} B_{i,k-1}(u) + \frac{u_{i+k+1} - u}{u_{i+k+1} - u_{i+1}} B_{i+1,k-1}(u), \quad k \ge 1    (2)

where u_i is a knot of the curve and the knot vector is U = [u_0, u_1, ..., u_n]. The NURBS surface can then be expressed as equation (3):

P(u,v) = \frac{\sum_{i=0}^{n} \sum_{j=0}^{m} B_{i,k}(u) B_{j,l}(v) V_{i,j} P_{i,j}}{\sum_{i=0}^{n} \sum_{j=0}^{m} B_{i,k}(u) B_{j,l}(v) V_{i,j}}    (3)

In equation (3), B_{i,k}(u) and B_{j,l}(v) are the B-spline basis functions in the u and v directions; the order of the basis functions is k in the u direction and l in the v direction. V_{i,j} is the weight factor and P_{i,j} is a control point of the surface. The weight factors in the NURBS surface equation make the control of the surface much more flexible: changing a weight factor pulls the surface closer to or further from the corresponding control point, so the control points offer a greater degree of freedom in the fitting. NURBS surface reconstruction can thus fit a surface directly to the measured three-dimensional point cloud data.
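The sketch below is a direct, unoptimized implementation of equations (2) and (3): the Cox-de Boor recursion for the B-spline basis and the rational (weighted) surface evaluation. The 4x4 control grid, unit weights and knot vectors are illustrative and are not fitted to real leaf data.

```python
# Sketch implementing equations (2) and (3): Cox-de Boor recursion for the
# B-spline basis B_{i,k}(u) and rational (NURBS) surface evaluation. Control
# points, weights and knot vectors are illustrative, not fitted leaf data.
import numpy as np

def bspline_basis(i, k, u, knots):
    """B_{i,k}(u) from equation (2) via the Cox-de Boor recursion."""
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = (u - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, u, knots)
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def nurbs_surface_point(u, v, ctrl, weights, ku, kv, U, V):
    """P(u, v) from equation (3): weighted rational combination of control points."""
    n, m, _ = ctrl.shape
    num, den = np.zeros(3), 0.0
    for i in range(n):
        Bu = bspline_basis(i, ku, u, U)
        for j in range(m):
            w = Bu * bspline_basis(j, kv, v, V) * weights[i, j]
            num += w * ctrl[i, j]
            den += w
    return num / den if den > 0 else num

# A tiny 4x4 cubic patch with unit weights (a NURBS with all weights 1 is a plain B-spline).
ctrl = np.array([[[i, j, 0.1 * (i - 1.5) * (j - 1.5)] for j in range(4)] for i in range(4)], float)
weights = np.ones((4, 4))
U = V = [0, 0, 0, 0, 1, 1, 1, 1]          # clamped cubic knot vectors
print(nurbs_surface_point(0.5, 0.5, ctrl, weights, 3, 3, U, V))
```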

Fig. 3. Rape photos obtained from different view angles

(a) Sparse point cloud (SFM)    (b) Dense point cloud (MVS)

Fig. 4. Operation result of three-dimensional point cloud

III. RESULT AND DISCUSSION

A total of 320 pictures of rape were taken from different view angles (Fig. 3) with an iPhone 5s under natural lighting conditions. In the first stage of the SFM pipeline, local invariant feature detection and matching is performed on the rape images, yielding groups of corresponding points among the images. Feature point detection and matching for the rape images from different view angles are computed with David Lowe's SIFT demo program [12]. The rape images, together with this feature matching information, are then fed to Noah Snavely's Bundler package [13], which outputs a sparse 3D reconstruction of the scene along with the recovered camera geometry. Fig. 4(a) shows the sparse 3D point cloud of rape output by Bundler. Before establishing the 3D model of the rape leaf, a dense 3D point cloud has to be generated with MVS technology. The dense 3D point cloud data is generated with Yasutaka Furukawa's CMVS and PMVS2 packages [14,15], and the result, displayed in the open-source 3D software MeshLab, is shown in Fig. 4(b). The result preserves the main shape features, texture information and color of the leaves; part of the rape stem is missing, but this does not affect the rest of the experiment. Before extracting the 3D point cloud data of the rape leaf, noise and background are eliminated through color filtering, which ensures that the surface fitting is not affected by other factors.
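A minimal sketch of this color-filtering step is given below; it assumes the dense cloud has already been parsed into Nx3 coordinate and RGB arrays (for example from the PMVS .ply output) and uses an illustrative green-dominance threshold that would need tuning in practice.

```python
# Sketch of the color-filtering step: keep points whose color is dominantly
# green (leaf) and drop soil/background/noise. Assumes the dense cloud has
# already been loaded as Nx3 arrays (e.g., parsed from the PMVS .ply output);
# the green-dominance margin is illustrative.
import numpy as np

def filter_leaf_points(points, colors, green_margin=0.05):
    """points: Nx3 xyz, colors: Nx3 RGB in [0, 1]. Returns the leaf subset."""
    r, g, b = colors[:, 0], colors[:, 1], colors[:, 2]
    is_leaf = (g > r + green_margin) & (g > b + green_margin)
    return points[is_leaf], colors[is_leaf]

# Illustrative call with random data standing in for the reconstructed cloud.
pts = np.random.rand(1000, 3)
cols = np.random.rand(1000, 3)
leaf_pts, leaf_cols = filter_leaf_points(pts, cols)
print(f"kept {len(leaf_pts)} of {len(pts)} points")
```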

Fig. 5. Surface fitting result of different three-dimensional point cloud leaves (green is the three-dimensional point cloud of leaf, while blue is the NURBS surface of the leaf)

In this paper, the 3D point cloud data of the four rape leaves with the most complete information is obtained by manual segmentation. NURBS surface fitting is then applied to the 3D point cloud of each oilseed rape leaf, yielding a regular, smooth 3D surface (Fig. 5). Using the checkerboard (Fig. 3) as the scale, the surface area in the real-world coordinate system is then calculated. Finally, the result is compared with the traditional digital image measurement method (Table I). In the traditional method (Fig. 6), a binary image of the leaf is obtained after image segmentation, the number of pixels belonging to the leaf is counted, the size of each pixel is determined with a ruler, and thus the area of each pixel is obtained; the leaf area is then the sum of the areas of all leaf pixels. Table I shows that the area of the 3D leaf model is larger than that calculated with the traditional method, which is consistent with the actual situation: the 3D model fully accounts for the folding and curving of the leaf, and therefore tends toward the true leaf area.
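A hedged sketch of the traditional pixel-counting method follows, using OpenCV thresholding; the file name, HSV bounds and the pixel count of the reference checkerboard square are placeholders that would be measured in practice.

```python
# Sketch of the traditional digital-image (ruler/checkerboard) method: segment
# the leaf, count leaf pixels, and convert to cm^2 using the per-pixel area
# derived from a reference of known size. File name, HSV bounds and the
# 1 cm^2-per-square assumption are placeholders.
import cv2
import numpy as np

img = cv2.imread("leaf_scan.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Segment green leaf pixels (bounds are illustrative and scene-dependent).
mask = cv2.inRange(hsv, (30, 40, 40), (90, 255, 255))
leaf_pixels = int(np.count_nonzero(mask))

# Scale: suppose one checkerboard square of known area (1 cm^2) was measured
# to cover `square_pixels` pixels in the same image.
square_pixels = 2500.0                      # measured separately, placeholder
area_per_pixel = 1.0 / square_pixels        # cm^2 per pixel
leaf_area_cm2 = leaf_pixels * area_per_pixel
print(f"leaf area = {leaf_area_cm2:.2f} cm^2")
```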

Fig. 6. Digital image measuring method - ruler method

TABLE I. DIFFERENT CALCULATION METHODS OF LEAF AREA

Leaf    Image-based area (cm2)    3D model area (cm2)    Difference (cm2)
1       85.1594                   89.2220                4.0626 (4.8%)
2       48.8258                   51.6436                2.8178 (5.8%)
3       37.1770                   38.7650                1.5880 (4.3%)
4       75.9882                   78.3731                2.3849 (3.1%)

A. Limitations

Limitations of the experimental conditions: (a) pictures taken with a mobile phone have low resolution, which may lead to missing 3D point cloud data, for instance in the rape stems; (b) pictures taken outdoors are easily affected by the surrounding environment; (c) pictures obtained through subjective manual shooting may leave some parts of the plant uncovered.

Limitations of the algorithm: the approach is not applicable when the plant is severely occluded, which causes confusion in key point detection and matching and leads to errors in 3D point cloud modeling.

IV. CONCLUSION

In this paper, SFM and MVS technologies are successfully applied to realize the 3D digitization of rape, and 3D modeling is conducted for the rape leaf to obtain its area. The results show that the model is accurate and detailed, and that it can serve as a 3D modeling approach for non-destructive detection of crops. The model can also yield other useful morphological and physiological features of crops, for instance leaf angle, perimeter, number of leaves, height and plant topology. Furthermore, the plant growth state can be analyzed with the color and texture information carried by the dense 3D point cloud data, and the relationship between variations in morphological features and crop yield can be analyzed through 3D modeling of crops at different growth stages.

Future research directions: 1. Improving the experimental environment by obtaining high-quality photos with high-end cameras and shooting with a mechanical arm, so that complete sets of plant photos from different angles can be acquired under computer control. 2. Modeling rape at different growth stages to obtain a growth mechanism model of rape. 3. Improving the algorithm: camera calibration and sparse 3D point cloud generation with SFM take considerable time, and generating the dense three-dimensional point cloud with MVS takes even longer, which leads to low modeling efficiency; moreover, the required time grows with the number of photos. The latest SLAM (simultaneous localization and mapping) technology, i.e. real-time SFM from a freely moving video camera, can not only perform 3D modeling of plants rapidly and in real time, but also measure the morphological characteristics of plants accurately. Eventually, we will explore the application of combined SLAM and robotics technology in precision agriculture and digital agriculture, so as to conduct 3D modeling and real-time monitoring of plants, improve planting efficiency and increase yield.

ACKNOWLEDGMENT

Research was supported by the National Natural Science Foundation of China (NNSF, Grant Nos. 41301522 and 41101409) and the Research Fund for the Doctoral Program of Higher Education (SRFDP, Grant No. 20110146120012).


REFERENCES

[1] S.H. Jo, K.S. Rhim, I.H. Oh, "Measuring tobacco leaf area by numerical integration of asymptotic regression equation," Applied Engineering in Agriculture, vol. 17(1), pp. 107-109, 2001.

[2] S. Delagrange, P. Rochon, “Reconstruction and analysis of a deciduous sapling using digital photographs or terrestrial-LiDAR technology,” Annals of botany, vol. 108, pp. 991–1000, 2011.

[3] C. Preuksakarn, F. Boudon, P. Ferraro, J-B. Durand, E. Nikinmaa, C. Godin, "Reconstructing plant architecture from 3D laser scanner data," in Proc. of the 6th Intl. Workshop on Functional-Structural Plant Models, 2010, pp. 14-16.

[4] B. Biskup, H. Scharr, U. Schurr, U. Rascher, "A stereo imaging system for measuring structural parameters of plant canopies," Plant, Cell & Environment, vol. 30, pp. 1299–1308, 2007.

[5] L. Quan, P. Tan, G. Zeng, L. Yuan, J. Wang, SB. Kang, "Image-based plant modelling," ACM Transactions on Graphics, vol. 25, pp. 599–604, 2006.

[6] TT. Santos, AA. Oliveira, "Image-based 3D digitizing for plant architecture analysis and phenotyping," In Workshop on Industry Applications (WGARI) in SIBGRAPI 2012 (XXV Conference on Graphics, Patterns and Images), 2012, pp. 21-28.

[7] T. Santos, J. Ueda, "Automatic 3D plant reconstruction from photographies, segmentation and classification of leaves and internodes using clustering," in Proceedings of the 7th International Conference on Functional-Structural Plant Models, Saariselkä, Finland, June 2013, pp. 95-97.

[8] B. Biskup, H. Scharr, U. Schurr, and U. Rascher, “A stereo imaging system for measuring structural parameters of plant canopies,” Plant, Cell & Environment, vol. 30, no. 10, pp. 1299–1308, 2007.

[9] S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, "A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms," in CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1. Washington, DC, USA: IEEE Computer Society, Jun. 2006, pp. 519–528.

[10] D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60(2), pp. 91-110, January 2004.

[11] L. Piegl, "On NURBS: a survey," IEEE Computer Graphics and Applications, vol. 11(1), pp. 55-71, 1991.

[12] D. Lowe, "Demo software: SIFT keypoint detector," July 2005. [Online]. Available: http://www.cs.ubc.ca/~lowe/keypoints/

[13] N. Snavely, “Bundler: Structure from Motion (SfM) for Unordered Image Collections,” April 2010. [Online]. Available: http://phototour.cs.washington.edu/bundler/

[14] Y. Furukawa, "Clustering Views for Multi-view Stereo (CMVS)." [Online]. Available: http://grail.cs.washington.edu/software/cmvs/

[15] Y. Furukawa, J. Ponce, "Patch-based Multi-view Stereo Software (PMVS)." [Online]. Available: http://grail.cs.washington.edu/software/pmvs/
