

F2006D130

NOVEL USER INTERFACE FOR SEMI-AUTOMATIC PARKING ASSISTANCE SYSTEM

1,2 Jung, Ho Gi; 1 Kim, Dong Suk*; 1 Yoon, Pal Joo; 2 Kim, Jaihie
1 MANDO Corporation, Republic of Korea; 2 Yonsei University, Republic of Korea

KEYWORDS – Automatic parking assistance system, target position designation, drag&drop user interface, computer vision, driver convenience system

ABSTRACT – This paper proposes a novel user interface for a semi-automatic parking assistance system, which automates steering during the parking operation. Despite recent progress in automatic target position designation, manual designation retains two important roles. First, manual designation can be used to refine a target position established by an automatic designation method. Second, manual designation serves as a backup when automatic designation fails. The proposed user interface provides an easy-to-use manual designation method based on the drag&drop concept. The target position is depicted as a rectangle on a touch-screen-based HMI (Human Machine Interface). The driver can move the rectangle by dragging its inside and rotate it by dragging its outside. We compare the proposed method with a multiple-arrow based method, which provides several arrow buttons to move and rotate the target position, by measuring total operation time and number of clicks. We conclude that the proposed method shortens the operation time and reduces the number of clicks.

TECHNICAL PAPER – INTRODUCTION

A semi-automatic parking system is a driver convenience system that automates the steering control required during a parking operation. Because drivers' interest in parking assist systems has recently increased drastically, car manufacturers and component suppliers are developing various kinds of parking assist systems (1)(2). Fig. 1 shows the configuration of the semi-automatic parking system currently being developed.
The system consists of six components: EPS (Electric Power Steering) for active steering, a vision sensor acquiring the rear-view image, ultrasonic sensors measuring distances to nearby side/rear obstacles, a touch-screen-based HMI (Human Machine Interface) providing information to the driver and receiving commands from the driver, EPB (Electric Parking Brake) automatically activating the parking brake, and a processing computer. The algorithms running on the processing computer consist of three components: target parking position designation, path planning, and a path tracker that continuously estimates the current position and controls the steering system to follow the planned path. There are many kinds of methods for target parking position designation: manual designation, range sensor based methods, GPS based methods, and vision based methods. The Prius IPAS (Intelligent Parking Assist System), mass-produced by Toyota and AISIN SEIKI in 2003, is an example of manual designation (3). Range sensor based methods are mainly used for parallel parking. The most common range sensor is the ultrasonic sensor (4)(5); there is also research using laser scanners (6)(7) or mm-wave radar (8)(9). GPS based methods make a path plan and then track it with GPS and a local digital map (10). Recently, vision based methods have attracted more and more interest because a vision sensor is already installed and


inexpensive compared to mm-wave radar. Marking based methods establish the target position by recognizing parking slot markings (11)(12). Object based methods establish the target position by recognizing adjacent vehicles (13)(14).

Fig. 1. System configuration of semi-automatic parking system

In spite of the rapid progress of automatic target position designation methods, manual designation retains two important roles. First, manual designation can be used to refine a target position established by an automatic designation method. In general, a parking system provides a rear-view image to help the driver understand the ongoing parking operation. Fig. 2 shows a typically installed rear-view camera and user interface. Furthermore, the system needs to receive the driver's confirmation of the automatically established target position; at that moment, the driver can naturally refine the target position with the manual designation method. Second, manual designation is necessary as a backup for the automatic designation method. Because the sensors used in automatic designation have their own weaknesses, the recognition result cannot always be perfect. If the system gives the driver a chance to modify the target position manually, faults of the automatic designation method can be corrected without serious inconvenience.

(a) rear view camera (b) touch screen based HMI

Fig. 2. Typical installation of camera and HMI

This paper proposes a novel manual designation method that enhances driver comfort by shortening the operation time and eliminating repetitive operations. The basic idea is based on


drag&drop operation, which is familiar to PC users. The target position is depicted as a rectangle on the touch-screen-based HMI. The driver can move the rectangle by dragging its inside and rotate it by dragging its outside. To verify the feasibility of this method, experiments with multiple participants are conducted. In the experiments, we consider two kinds of views, i.e. the distorted view and the bird's eye view, and two kinds of situations, i.e. garage parking and parallel parking. We compare the proposed method with the multiple-arrow based method, which provides several arrow buttons to move the target position, by measuring total operation time and number of clicks. We conclude that the proposed method shortens the operation time and reduces the number of clicks.

DRAG&DROP BASED USER INTERFACE

Three Coordinate Systems

The proposed system compensates for the fisheye lens distortion of the input image and constructs a bird's eye view image using a homography. The installed rear-view camera uses a fisheye lens, or wide-angle lens, to cover a wide FOV (Field Of View) during the parking procedure. As shown in Fig. 3, the input image through the fisheye lens captures a wide range of the rear scene but inevitably includes severe distortion. It is well known that the major factor of fisheye lens distortion is radial distortion, which is defined in terms of the distance from the image centre (15). Modelling the radial distortion as a 5th order polynomial using the Caltech calibration toolbox and approximating its inverse mapping by a 5th order polynomial, the proposed system acquires the undistorted image shown in Fig. 3 (16). The homography, which defines a one-to-one correspondence between coordinates in the undistorted image and coordinates in the bird's eye view image, can be calculated from the height and angle of the camera with respect to the ground surface (12). The bird's eye view is a virtual image taken from the sky under the assumption that all objects are attached to the ground surface.
The general pinhole camera model causes perspective distortion, by which the size of an object's image changes according to its distance from the camera. In contrast, because the bird's eye view image eliminates the perspective distortion of objects attached to the ground surface, it is suitable for the recognition of objects painted on the ground surface. The final image of Fig. 3 is the bird's eye view image of the undistorted image.
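The mapping into the bird's eye view can be sketched as follows. This is a minimal illustration, not the authors' implementation: the matrix H below is a made-up example, whereas in the real system the homography is derived from the camera's height and angle with respect to the ground surface (12), and the radial undistortion step (the 5th order polynomial and its inverse) is assumed to have already been applied to the pixel coordinates.

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 pixel coordinates through a 3x3 homography H."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coordinates
    mapped = pts @ H.T                                    # apply H to every point
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian coordinates

# Hypothetical homography; in the real system it is calculated from the
# camera's height and angle with respect to the ground surface (12).
H = np.array([[1.0, 0.2,   -40.0],
              [0.0, 1.5,   -90.0],
              [0.0, 0.005,   1.0]])

# Two sample points in the undistorted image (pixels)
undistorted_px = np.array([[320.0, 240.0], [100.0, 400.0]])
birds_eye = apply_homography(H, undistorted_px)
```

Because the homography is a one-to-one correspondence, the same routine applied with the inverse matrix maps bird's eye view coordinates back into the undistorted image.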

Fig. 3. Construction procedure of bird's eye view image

Drag&Drop Concept

The target position is a rectangle in the world coordinate system, or bird's eye view image coordinate system. The target position is managed by its 2D location (Xw, Zw) and its angle φ with respect to the Xw-axis. The width and length of the target position rectangle are determined from the ego-vehicle's width and length. With the radial distortion model and the homography, a point in the bird's eye view image coordinate system corresponds to a point in the distorted image coordinate system, or input image coordinate system. Therefore, by converting every coordinate into the bird's eye view image coordinate system, we can implement all operations uniformly in one coordinate system. The target position rectangle and user inputs are treated in the bird's eye view image coordinate system and then converted to the proper coordinate system according to the display mode. The target position rectangle displayed on the touch-screen-based HMI acts as a cursor while the driver establishes the target position. The inside region of the rectangular target position is used as a moving cursor: the driver can move the target position by dragging the inside, as shown in Fig. 4(a). The outside region is used as a rotating cursor: the driver can rotate the target position, i.e. change its angle, by dragging the outside, as shown in Fig. 4(b). Three kinds of operations are needed: 1) determining whether a driver's input, i.e. the pointing point, is inside the target position rectangle or not, 2) calculating the translation transformation from two consecutive driver inputs, and 3) calculating the rotation transformation from two consecutive driver inputs.
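As a small illustration of this parameterization, the sketch below computes the four corners of the target rectangle from (Xw, Zw), φ, and the vehicle footprint. Treating (Xw, Zw) as the rectangle's centre, and the sample dimensions of 4.5 m by 1.8 m, are assumptions of this sketch, not values from the paper.

```python
import math

def target_corners(xw, zw, phi, length, width):
    """Four corners, in rotating order, of the target rectangle whose long
    axis makes angle phi with the Xw-axis.  Treating (xw, zw) as the
    rectangle's centre is an assumption of this sketch."""
    c, s = math.cos(phi), math.sin(phi)
    offsets = [( length / 2,  width / 2),   # corner offsets before rotation
               (-length / 2,  width / 2),
               (-length / 2, -width / 2),
               ( length / 2, -width / 2)]
    # Rotate each offset by phi, then translate to the centre (xw, zw)
    return [(xw + c * dx - s * dz, zw + s * dx + c * dz) for dx, dz in offsets]

# Hypothetical ego-vehicle footprint: 4.5 m long, 1.8 m wide
corners = target_corners(0.0, 5.0, math.radians(30), 4.5, 1.8)
```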

(a) Moving by dragging the inside of rectangle

(b) Rotating by dragging the outside of rectangle

Fig. 4. Target position rectangle as moving and rotating cursor

Mode Selection

Whether a point is inside a rectangle or not can be determined by checking whether the point lies on the same side of all four rectangle sides taken in rotating direction. In this application, the relative locations of the four corner points cannot be fixed in advance because the rectangle can be rotated; only the order of the four corner points is known. Let C1, C2, C3, C4 be the four corner points of the rectangle in rotating direction and let T be the user's pointing point. We can compute the cross-product between two vectors, e.g. C1C2 and C1T, as depicted in Fig. 5. If the z-components of all four cross-products have the same sign, the point T is located inside the rectangle, as shown in Fig. 6(a). Conversely, if any z-component has a different sign, the point T is located outside the rectangle, as shown in Fig. 6(b).

Fig. 5. Cross-product between a rectangle-side and corner-pointing

(a) Four cross-products have the same direction (b) One cross-product has a different direction

Fig. 6. Determining whether a point is in a rectangle or not
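The same-sign test can be sketched directly. This is a hedged illustration: the corner ordering and winding convention follow Figs. 5 and 6, while the axis-aligned sample rectangle is invented for the example.

```python
def cross_z(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_rectangle(corners, t):
    """True if point t lies inside the rectangle whose four corners are
    listed in rotating order (C1, C2, C3, C4): the z-components of all
    four side-to-point cross products must share one sign."""
    zs = [cross_z(corners[i], corners[(i + 1) % 4], t) for i in range(4)]
    return all(z > 0 for z in zs) or all(z < 0 for z in zs)

rect = [(0, 0), (4, 0), (4, 2), (0, 2)]     # invented axis-aligned example
inside = point_in_rectangle(rect, (2, 1))   # True: all cross products agree
outside = point_in_rectangle(rect, (5, 1))  # False: one sign differs
```

Because only the sign pattern matters, the test works for any rotation of the rectangle, which is exactly why the relative locations of the corners never need to be fixed in advance.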

Translation of Target Position

The translation transformation is applied equally to every point of the target rectangle. Therefore, the new target position can be determined by adding the difference vector between two consecutive user input points, P1 and P2, to the current target position, as shown in Fig. 7(a).

Rotation of Target Position

The rotation transformation with respect to the centre point C is applied equally to every point of the target rectangle. Therefore, the new target position can be determined by rotating the current target position about C by the angle θ between the two consecutive user input points, as shown in Fig. 7(b).


(a) translation vector by difference vector (b) rotation angle by between-angle

Fig. 7. Transformation calculation
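Both transformations can be sketched as follows; this is an illustrative implementation of Fig. 7, with the rectangle's corners stored as coordinate pairs (an assumption of this sketch):

```python
import math

def translate(corners, p1, p2):
    """Shift every corner by the difference vector from P1 to P2 (Fig. 7a)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return [(x + dx, y + dy) for x, y in corners]

def rotate(corners, c, p1, p2):
    """Rotate every corner about the centre C by the angle theta between
    the vectors C->P1 and C->P2 (Fig. 7b)."""
    theta = (math.atan2(p2[1] - c[1], p2[0] - c[0])
             - math.atan2(p1[1] - c[1], p1[0] - c[0]))
    cs, sn = math.cos(theta), math.sin(theta)
    return [(c[0] + cs * (x - c[0]) - sn * (y - c[1]),
             c[1] + sn * (x - c[0]) + cs * (y - c[1])) for x, y in corners]
```

In both cases the whole rectangle is updated from just two consecutive touch inputs, which is what makes a single drag gesture sufficient for each adjustment.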

EXPERIMENTAL RESULTS

To verify the efficiency of the proposed method, we measure the operation time and number of clicks, and compare the drag&drop based method with the multiple-arrow based method. For garage parking, the operation time is reduced by 17.6% and the number of clicks by 64.2%. For parallel parking, the operation time is reduced by 29.4% and the number of clicks by 75.1%.

Experiment Method

Before the test, we briefly explain the operation of the two methods: the drag&drop based method and the multiple-arrow based method. The multiple-arrow based method is similar to the user interface of the first-generation Prius; it has 10 arrow buttons, 8 for translation and 2 for rotation. Every participant establishes target positions for 8 situations with both methods: 4 garage parking situations and 4 parallel parking situations. Within each set of 4 situations, 2 are tested in the bird's eye view image and the other 2 in the distorted image. Figs. 8 to 11 show situations 1 to 8. In total, 50 volunteers participate in the test. The average age is 30.1, in the range 22 to 42; 41 participants are male and 9 are female. Every participant conducts the test only once, and the test order between the drag&drop based method and the multiple-arrow based method is randomized.

(a) situation 1 with drag&drop method (b) situation 1 with arrows method


(c) situation 2 with drag&drop method (d) situation 2 with arrows method

Fig. 8. Garage parking cases in bird’s eye view image

(a) situation 3 with drag&drop method (b) situation 3 with arrows method

(c) situation 4 with drag&drop method (d) situation 4 with arrows method

Fig. 9. Garage parking cases in distorted image

(a) situation 5 with drag&drop method (b) situation 5 with arrows method

(c) situation 6 with drag&drop method (d) situation 6 with arrows method

Fig. 10. Parallel parking cases in bird’s eye view image


(a) situation 7 with drag&drop method (b) situation 7 with arrows method

(c) situation 8 with drag&drop method (d) situation 8 with arrows method

Fig. 11. Parallel parking cases in distorted image

Test Result

Table 1 shows the average operation time for the 4 garage parking situations: the drag&drop based method reduces the operation time by 17.6%. Table 2 shows the average operation time for the 4 parallel parking situations: the drag&drop based method reduces the operation time by 29.4%.

Table 1. Average operation time for garage parking situations

Situation No. | Drag&Drop (A) | Multiple arrow (B) | Enhancement, (B-A)/B (%)
1             | 11.9          | 17.2               | 30.6
2             | 11.6          | 12.7               | 9.1
3             | 11.7          | 16.1               | 27.5
4             | 11.2          | 11.5               | 3.1
Average       |               |                    | 17.6
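The enhancement column is the relative saving (B - A)/B expressed as a percentage. The sketch below recomputes Table 1 from its rounded entries; because the published percentages were presumably computed from unrounded measurements, the recomputed values differ slightly (e.g. about 30.8% versus the published 30.6% for situation 1).

```python
def enhancement(a, b):
    """Relative saving of method A over method B: (B - A) / B, in percent."""
    return (b - a) / b * 100.0

# Table 1 rows as (drag&drop A, multiple-arrow B) operation times
garage_times = [(11.9, 17.2), (11.6, 12.7), (11.7, 16.1), (11.2, 11.5)]
per_situation = [enhancement(a, b) for a, b in garage_times]
average = sum(per_situation) / len(per_situation)
```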

Table 2. Average operation time for parallel parking situations

Situation No. | Drag&Drop (A) | Multiple arrow (B) | Enhancement, (B-A)/B (%)
5             | 12.4          | 16.7               | 25.3
6             | 12.5          | 18.8               | 33.4
7             | 12.8          | 15.7               | 18.4
8             | 11.2          | 18.9               | 40.5
Average       |               |                    | 29.4

Table 3 shows the average number of clicks for the 4 garage parking situations: the drag&drop based method reduces the number of clicks by 64.2%. Table 4 shows the average number of clicks for the 4 parallel parking situations: the drag&drop based method


reduces the number of clicks by 75.1%. The reduction in the number of clicks means a reduction of repetitive operation. Many participants rate this as the most important advantage of the proposed drag&drop method, because repetitive clicking is a really tedious job.

Table 3. Average number of clicks for garage parking situations

Situation No. | Drag&Drop (A) | Multiple arrow (B) | Enhancement, (B-A)/B (%)
1             | 7.1           | 24.0               | 70.5
2             | 6.4           | 16.7               | 61.8
3             | 6.6           | 20.9               | 68.5
4             | 6.6           | 15.0               | 56.1
Average       |               |                    | 64.2

Table 4. Average number of clicks for parallel parking situations

Situation No. | Drag&Drop (A) | Multiple arrow (B) | Enhancement, (B-A)/B (%)
5             | 7.9           | 27.1               | 70.8
6             | 6.5           | 35.1               | 81.6
7             | 9.1           | 27.2               | 66.5
8             | 7.2           | 38.6               | 81.5
Average       |               |                    | 75.1

It is noticeable that there is no tendency with respect to the view: there is no definite difference between the distorted image cases and the bird's eye view image cases. However, for the parallel parking situations in the bird's eye view image, many participants complain about the low quality of the bird's eye view image. To make the method more practical, the bird's eye view image should be enhanced. Finally, we find that to implement the drag&drop method successfully, the sensitivity of the touch screen should be improved, because the pushing force generally drops during a dragging operation.

CONCLUSION

In this paper, we propose a novel manual target designation method based on the drag&drop concept. The target position is displayed as a rectangle, and the driver can move the target by dragging its inside and rotate it by dragging its outside. Through experiments, we confirm that the proposed method reduces the operation time and the number of clicks. The major contribution is that, with the proposed method, the driver can quickly establish the target position while avoiding tedious repetitive clicking operations. Future work includes enhancing the image quality of the bird's eye view for parallel parking and improving the sensitivity of the touch screen.

REFERENCES

(1) Richard Bishop, "Intelligent Vehicle Technology and Trends", Artech House Publishers, 2005

(2) Randy Frank, "Sensing in the Ultimately Safe Vehicle", Society of Automotive Engineers, SAE Paper No.: 2004-21-0055, 2004


(3) Masayuki Furutani, “Obstacle Detection Systems for Vehicle Safety”, Society of Automotive Engineers, SAE Paper No.: 2004-21-0057, 2004

(4) Wei Chia Lee and Torsten Bertram, “Driver Centered Design of an Advanced Parking Assistance”, 5th European Congress and Exhibition on ITS and Services, 2005

(5) J. Pohl, M. Sethsson, P. Degerman, and J. Larsson, "A semi-automated parallel parking system for passenger cars", Proc. IMechE, Vol. 220, Part D: J. Automobile Engineering, 2006

(6) Alexander Schanz, Andreas Spieker, and Klaus-Dieter Kuhnert, "Autonomous Parking in Subterranean Garages – A Look at the Position Estimation", IEEE Intelligent Vehicles Symposium 2003, pages: 253-258, 2003

(7) Christopher Tay Meng Keat, Cédric Pradalier, and Christian Laugier, “Vehicle Detection And Car Park Mapping Using Laser Scanner”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pages: 2054-2060, 2005

(8) Stefan Görner and Hermann Rohling, “Parking Lot Detection with 24GHz Radar Sensor”, 3rd International Workshop on Intelligent Transportation (WIT 2006), 2006

(9) M. Klotz, and H. Rohling, “A high range resolution radar system network for parking aid applications”, 5th International Conference on Radar Systems, 1999

(10) Masaki Wada, Kang Sup Yoon, and Hideki Hashimoto, "Development of Advanced Parking Assistance System", IEEE Transactions on Industrial Electronics, Vol. 50, No. 1, pages: 4-17, 2003

(11) Jin Xu, Guang Chen, and Ming Xie, "Vision-Guided Automatic Parking for Smart Car", IEEE Intelligent Vehicles Symposium 2000, pages: 725-730, 2000

(12) H. G. Jung, D. S. Kim, P. J. Yoon, and J. H. Kim, “3D Vision System for the Recognition of Free Parking Site Location”, International Journal of Automotive Technology, Vol. 7, No. 3, pages: 361-367, 2006

(13) Nico Kaempchen, Uwe Franke, and Rainer Ott, "Stereo vision based pose estimation of parking lots using 3D vehicle models", IEEE Intelligent Vehicles Symposium 2002, Vol. 2, pages: 459-464, 2002

(14) C. Vestri, S. Bougnoux, R. Bendahan, K. Fintzel, S. Wybo, F. Abad, and T. Kakinami, "Evaluation of a Point Tracking Vision System for Parking Assistance", 12th World Congress on ITS, 2005

(15) J. Salvi, X. Armangué, and J. Batlle, "A comparative review of camera calibration methods with accuracy evaluation", Pattern Recognition, Vol. 35, pages: 1617-1635, 2002

(16) J. Y. Bouguet, “Camera Calibration Toolbox for Matlab”, http://www.vision.caltech.edu/bouguetj/calib_doc/index.html