2014 Third International Conference on Agro-Geoinformatics, Beijing, China, August 11-14, 2014


A New Approach for Measuring 3D Digitalized Rape Leaf Parameters Based on Images

FANG Yihang, LIN Chengda*, ZHAI Ruifang, TANG Yao, WANG Xingyu
College of Resource and Environment, Huazhong Agricultural University, Wuhan, China
*Corresponding author: LIN Chengda, linchengda@mail.hzau.edu.cn

Abstract: At present, crop leaf area is mainly measured in a two-dimensional way, which can cause a certain degree of damage to the plant. It is therefore of great significance to propose a non-contact method for leaf area measurement. In this paper, pictures of rape are obtained from different view angles with a mobile camera, and three-dimensional digitization of the rape plant is then realized with SFM (Structure from Motion) and MVS (Multi-View Stereo) technologies. We conduct a case study and establish a three-dimensional model of the rape leaf by applying NURBS surface fitting. Finally, the surface area of the rape leaf is calculated automatically and compared with the area obtained by a traditional digital image measurement method; in this way, non-destructive detection of rape growth for precision agriculture is realized. The experimental results support the proposed image-based method for measuring 3D digitalized rape leaves.

Keywords: 3D leaf model; structure from motion (SFM); multi-view stereo (MVS); leaf area measurement; precision agriculture

I. INTRODUCTION

The leaf is a significant organ for transpiration, photosynthesis and the synthesis of organic matter. The developmental status and size of the leaves have a large impact on growth and development, stress resistance and yield, and can also guide cultivation density, fertilization level and plant physiology studies. Furthermore, rape is one of the most important crops in China [1]. It is a significant raw material for fodder, chemical engineering and energy, and it plays a large strategic and economic role in the national economy. Consequently, it is urgent and necessary to propose an approach for measuring the rape leaf directly under natural growing conditions. Such an approach is of great significance to the development of digital agriculture and precision agriculture, because it can provide real-time and accurate basic information about the key growth stages of plants and thus support decision making in agricultural production.

Traditionally, leaf area was mainly measured in a two-dimensional way, for instance with the planimeter method, the transparent squares method, the weighing method, the empirical formula method, the leaf platometer or the digital image processing method. However, these methods are all destructive: they cause a certain amount of damage to the leaves, interrupt continuous growth, and further affect the continuity of crop experiments. They not only waste time and energy, but their accuracy and precision also vary with measurement conditions and subjective judgment. At present, newer measurement methods mainly acquire three-dimensional point cloud data of the plant in a non-contact way and then build a three-dimensional model from the point cloud to measure the morphological parameters of the plant. For instance, three-dimensional point cloud data can be acquired with laser scanning, but this approach has inherent defects, such as the lack of leaf color and texture information, expensive laser scanning devices, and sensitivity to the experimental environment [2,3]. Machine-vision-based shape detection and modeling is an alternative way of obtaining the three-dimensional model of a specific plant [4,5]. Researchers have already proposed several plant reconstruction methods, including surface-section-based reconstruction, outline-based reconstruction, and three-dimensional point cloud reconstruction based on stereo vision [6,7]. The stereo-vision-based point cloud reconstruction can overcome the problems of the earlier methods: it obtains the three-dimensional model of the plant, together with the color and texture information of the leaves, cheaply, conveniently and non-destructively.

Fig. 1 shows the workflow of three-dimensional modeling and non-destructive measurement of the rape leaf based on stereo vision. First, rape photos are obtained from different view angles, and SFM technology [8] is applied to generate a sparse three-dimensional point cloud of the rape plant and the camera parameters of each photo from a series of uncalibrated photos. On this basis, MVS [9] performs region growing on the data generated by SFM to produce a dense three-dimensional point cloud. The point cloud of each leaf is then obtained by manual segmentation. Finally, NURBS surface fitting is applied to the dense three-dimensional point cloud of the leaf, and the area of the surface model is measured. The experimental result is compared with that obtained from a traditional digital image measurement method.

Fig. 1. The workflow chart: images -> SIFT keypoint detection and matching -> SFM (Bundler) sparse 3D point cloud -> MVS (PMVS) dense 3D point cloud -> filtering and segmentation -> NURBS leaf surface fitting
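This workflow can be scripted end to end. The following is a minimal sketch, assuming Bundler's RunBundler.sh script and the PMVS2 binary are installed and invoked in their usual way; the directory layout, the Bundler-to-PMVS conversion step and the downstream filtering/fitting steps are only indicated in comments, and the exact commands depend on the local installation.

```python
import subprocess
from pathlib import Path

def reconstruct_rape_plant(image_dir: str) -> Path:
    """Sketch of the Fig. 1 pipeline: photos -> sparse cloud (SFM) -> dense cloud (MVS).

    Assumes Bundler's RunBundler.sh and the pmvs2 binary are on PATH, and that the
    Bundler output is converted to the PMVS layout (e.g. with Bundle2PMVS/CMVS,
    omitted here); exact invocations vary by installation.
    """
    work = Path(image_dir)

    # 1) SFM with Bundler: SIFT keypoint detection/matching and bundle adjustment,
    #    producing a sparse point cloud plus per-image camera parameters.
    subprocess.run(["RunBundler.sh"], cwd=work, check=True)

    # 2) MVS with PMVS2: densify the now-calibrated views into a colored point cloud.
    #    "pmvs/" and "option-0000" follow the usual PMVS directory layout.
    subprocess.run(["pmvs2", "pmvs/", "option-0000"], cwd=work, check=True)

    # The dense cloud is then filtered, segmented by leaf and fitted with a NURBS
    # surface in the later steps of the workflow.
    return work / "pmvs" / "models" / "option-0000.ply"
```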
II. ILLUSTRATION OF METHOD

A. Detection and Matching of Key Point Features

SIFT (Scale-Invariant Feature Transform) [10] is an algorithm for detecting local features. SIFT features are invariant to rotation, scaling and brightness changes, and they are stable to a certain extent under viewpoint change, affine transformation and noise. Therefore, the SIFT algorithm is adopted to detect and match the key point features. It mainly consists of four steps: 1) scale-space extrema detection: potential interest points that are invariant to scale and rotation are detected with a difference-of-Gaussian function in scale space; 2) keypoint localization: at each candidate location, the position and scale of the keypoint are determined; 3) orientation assignment: each keypoint is assigned an orientation based on the local image gradient directions; 4) keypoint description: the local image gradients are measured in the neighborhood of each keypoint to form a descriptor.
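The experiments in Section III use David Lowe's SIFT demo program; as a self-contained illustration of this step, the sketch below uses OpenCV's SIFT implementation instead, with Lowe's ratio test to keep unambiguous matches (the image file names are placeholders, and depending on the OpenCV version the contrib package may be required).

```python
import cv2

# Load two rape photos taken from different view angles (file names are placeholders).
img1 = cv2.imread("rape_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("rape_view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()

# Steps 1-4: scale-space extrema, keypoint localization, orientation, descriptors.
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two views and keep matches that pass the ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} good matches")
```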
B. Three-dimensional Digitization of Rape with SFM and MVS Technology

Calculating the three-dimensional information of a scene from an image sequence or video is an inverse process of imaging. Generally, the calculation consists of two steps: SFM (Structure from Motion) and MVS (Multi-View Stereo). SFM, also known as calibration, computes a sparse three-dimensional point cloud from the image sequence together with the camera parameters of each picture (internal parameters, orientation and location). Once the camera parameters of each image are obtained, MVS computes a dense three-dimensional point cloud through dense matching of pixels between images.

Fig. 2. SFM schematic diagram

SFM is the process of finding matching points across images taken from different angles and then computing, from these matches, the corresponding three-dimensional point locations and the camera parameters of each image. The theoretical foundation of SFM (Fig. 2) is the geometry of perspective projection, which establishes the relation between two dimensions and three dimensions. Suppose there is a point X in three-dimensional space and its corresponding two-dimensional pixel on image I is x; then there is a perspective projection matrix P = K[R|t] (K is the camera's internal parameter matrix, R is the camera orientation, and t is the camera translation) such that the projection equation x = PX holds. SFM extracts matching two-dimensional feature points x1, x2, ... from pictures taken from different angles, and from these correspondences and the projection equation it computes the three-dimensional coordinates of the matched points as well as the internal parameters, orientation and position of each image. Fig. 4(a) shows the sparse three-dimensional point cloud and the per-image camera parameters recovered in this way (the Bundler output); both are computed from the sparse two-dimensional matches. Since this point cloud is too sparse, only simple object models can be generated from it (for instance regular buildings, furniture, etc.). For complicated objects such as plants, a dense three-dimensional point cloud must be reconstructed, which requires MVS technology.

MVS takes the calibrated image sequence as input and generates a corresponding point cloud or mesh model. MVS methods mainly fall into the following types: (1) voxel-based methods, which divide the space around the object into small cubes (voxels) and decide whether each voxel is kept according to the photo-consistency of the images; (2) deformable mesh methods, which take the visual hull obtained from the object outlines in the images as an initial mesh model and then deform the mesh toward the real object surface by exploiting the photo-consistency of the images; (3) depth map methods, which compute a depth map for each picture from the pixel matching among different images and then merge the depth maps into a unified three-dimensional point cloud; (4) patch-based methods, which represent the reconstructed object surface as a collection of oriented surface patches and compute the dense three-dimensional point cloud by expanding from the sparse matches. Fig. 4(b) shows the dense three-dimensional point cloud generated from the images with the patch-based algorithm PMVS of Furukawa and Ponce. Since each three-dimensional point corresponds to two or more pixels on different images, it also carries color information.
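A minimal numerical sketch of the projection relation x = PX is given below: a 3D point is projected into two views with P = K[R|t] and then recovered from the two pixel observations by linear triangulation, the core geometric operation SFM applies to real matched features. The intrinsics, camera poses and the 3D point are made-up example values, and the full bundle adjustment performed by Bundler is not shown.

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t]: maps homogeneous 3D points to homogeneous pixel coordinates."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a 3D point X (length 3) to pixel coordinates via x = P X."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one 3D point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]
    return X_h[:3] / X_h[3]

# Made-up intrinsics and a second camera translated along the x-axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-0.2, 0.0, 0.0]))

X_true = np.array([0.1, -0.05, 2.0])          # a point on a (hypothetical) leaf
x1, x2 = project(P1, X_true), project(P2, X_true)
print(triangulate(P1, x1, P2, x2))            # approximately equals X_true
```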
Fig. 3. Rape photos obtained from different view angles

Fig. 4. Operation result of the three-dimensional point cloud: (a) sparse point cloud (SFM); (b) dense point cloud (MVS)

C. NURBS Surface Fitting of Rape Leaf

Surface fitting of the three-dimensional point cloud is carried out with the NURBS (Non-Uniform Rational B-Splines) surface method [11], which provides a unified algorithm for describing free-form curves and surfaces as well as primary analytic curves and surfaces, with the advantages of flexible operation, stable and fast computation, and a clear geometric interpretation. As a surface reconstruction over quadrilateral regions, a NURBS surface can interpolate the data points of a quadrilateral mesh and construct an n-region surface. NURBS is the standard mathematical representation of the geometric shape of industrial products, and its k-th order curve form can be expressed as equation (1):

P(u) = \frac{\sum_{i=0}^{n} B_{i,k}(u) W_i V_i}{\sum_{i=0}^{n} B_{i,k}(u) W_i}    (1)

In equation (1), V_i are the control points of the curve and W_i are the weight factors; changing a weight factor changes the influence of the corresponding control point. B_{i,k}(u) is the k-th order B-spline basis function, defined recursively by

B_{i,0}(u) = \begin{cases} 1, & u_i \le u \le u_{i+1} \\ 0, & \text{otherwise} \end{cases}, \qquad B_{i,k}(u) = \frac{u - u_i}{u_{i+k} - u_i} B_{i,k-1}(u) + \frac{u_{i+k+1} - u}{u_{i+k+1} - u_{i+1}} B_{i+1,k-1}(u), \quad k \ge 1    (2)

where u_i are the knots of the curve, with knot vector U = [u_0, u_1, \ldots, u_n]. The NURBS surface can then be expressed as

P(u, v) = \frac{\sum_{i=0}^{n} \sum_{j=0}^{m} B_{i,k}(u) B_{j,l}(v) V_{i,j} P_{i,j}}{\sum_{i=0}^{n} \sum_{j=0}^{m} B_{i,k}(u) B_{j,l}(v) V_{i,j}}    (3)

In equation (3), B_{i,k}(u) and B_{j,l}(v) are the B-spline basis functions in the u and v directions, of order k and l respectively; V_{i,j} are the weight factors and P_{i,j} are the control points of the surface. The weight factors in the NURBS surface equation make the control of the surface more flexible: changing a weight moves the surface farther from or closer to the corresponding control point, so the control points provide a greater degree of freedom in fitting the surface. NURBS surface reconstruction can therefore fit a surface directly to the measured three-dimensional point cloud data.
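As a concrete reading of equations (1)-(3), the following is a minimal sketch that evaluates one surface point with the Cox-de Boor recursion of equation (2) and the rational form of equation (3). The 3x3 control grid, uniform weights and clamped knot vectors are made-up example values; in practice a library such as geomdl can fit and evaluate the leaf surface.

```python
import numpy as np

def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion, equation (2): B_{i,k}(u) on the given knot vector."""
    if k == 0:
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        # keep the basis well defined at the right end of the parameter range
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = (u - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, u, knots)
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def nurbs_surface_point(u, v, ctrl, weights, ku, kv, knots_u, knots_v):
    """Equation (3): rational B-spline surface point at parameters (u, v)."""
    n, m = ctrl.shape[0], ctrl.shape[1]
    num, den = np.zeros(3), 0.0
    for i in range(n):
        Bu = bspline_basis(i, ku, u, knots_u)
        if Bu == 0.0:
            continue
        for j in range(m):
            w = Bu * bspline_basis(j, kv, v, knots_v) * weights[i, j]
            num += w * ctrl[i, j]
            den += w
    return num / den

# Toy example: a 3x3 control grid of a gently curved patch, uniform weights,
# clamped degree-2 knot vectors in both directions.
ctrl = np.array([[[x, y, 0.2 * x * y] for y in (0.0, 0.5, 1.0)] for x in (0.0, 0.5, 1.0)])
weights = np.ones((3, 3))
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # 3 control points, degree 2
print(nurbs_surface_point(0.5, 0.5, ctrl, weights, 2, 2, knots, knots))
```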
III. RESULT AND DISCUSSION

A total of 320 pictures of rape were taken from different view angles (Fig. 3) with an iPhone 5s under natural lighting conditions. Within the SFM framework, detection and matching of local invariant features is first performed on the rape images, yielding groups of corresponding points among the images. Feature point detection and matching across the different view angles are computed with David Lowe's SIFT demo program [12]. The rape images, together with the resulting feature matching information, are then fed into Noah Snavely's Bundler package [13]. Bundler outputs a sparse 3D reconstruction of the scene reflected by the images, along with the recovered camera and scene geometry. Fig. 4(a) shows the sparse 3D point cloud of the rape plant output by Bundler. Before the 3D model of the rape leaf is established, a dense 3D point cloud has to be generated with MVS technology. The dense 3D point cloud data is generated with Yasutaka Furukawa's CMVS and PMVS2 packages [14,15], and the result, displayed in the open-source 3D software MeshLab, is shown in Fig. 4(b). The result preserves the main shape features, texture information and color of the leaves; part of the rape stem is missing, but this does not affect the rest of the experiment. Before the 3D point cloud data of the rape leaves is extracted, the remaining noise and background are removed by color filtering, which guarantees that the surface fitting is not affected by other factors.

Fig. 5. Surface fitting result of different three-dimensional point cloud leaves (green is the three-dimensional point cloud of the leaf, while blue is the NURBS surface of the leaf)

In this paper, the 3D point cloud data of the four rape leaves with the most complete information is obtained by manual segmentation. NURBS surface fitting is then applied to the 3D point cloud data of each rape leaf, and regular, smooth 3D surfaces are obtained (Fig. 5). Later, with the chessboard (Fig....
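One way the leaf area can be read off the fitted surface is sketched below. It is not necessarily the authors' exact procedure: it assumes the NURBS surface has been sampled into a regular grid of 3D points (e.g. by evaluating it on a parameter grid) and that a chessboard-derived scale factor converting model units to centimetres is available; the synthetic grid and the scale value are made-up examples.

```python
import numpy as np

def grid_surface_area(points: np.ndarray) -> float:
    """Area of a surface sampled as an (H, W, 3) grid of 3D points.

    Each grid quad is split into two triangles whose areas are summed,
    approximating the area of the fitted leaf surface.
    """
    p00 = points[:-1, :-1]   # top-left corner of each quad
    p10 = points[1:, :-1]    # bottom-left
    p01 = points[:-1, 1:]    # top-right
    p11 = points[1:, 1:]     # bottom-right
    a1 = 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00), axis=-1)
    a2 = 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11), axis=-1)
    return float((a1 + a2).sum())

# Synthetic example: a gently curved 50x50 sample grid standing in for the
# NURBS-fitted leaf surface (coordinates in arbitrary model units).
u = np.linspace(0.0, 1.0, 50)
v = np.linspace(0.0, 1.0, 50)
U, V = np.meshgrid(u, v, indexing="ij")
surface = np.stack([U, V, 0.1 * np.sin(np.pi * U) * np.sin(np.pi * V)], axis=-1)

scale_cm_per_unit = 2.0   # hypothetical chessboard-derived scale (cm per model unit)
area_cm2 = grid_surface_area(surface) * scale_cm_per_unit ** 2
print(f"estimated leaf area: {area_cm2:.2f} cm^2")
```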
