Mechatronics - Assignment 1

Kerrie Noble, 200948192
DM309 – Mechatronics Design and Applications
19/11/2011


Table of Contents

Introduction
A Brief Outline of Vision Systems
New Technology Available in Vision Systems
How Omnidirectional Vision Works
Existing Applications of Omnidirectional Vision Systems
Automated Manufacture with Omnidirectional Vision Systems
Conclusion
References
Appendix 1
Appendix 2
Appendix 3



Introduction

In many circumstances the reality of robotic capability still lags behind its science-fiction portrayal. [1] C-3PO, the famous fictional character from the Star Wars saga, is a protocol droid who boasts that he is fluent 'in over six million forms of communication'. [2] In my opinion this epitomises the visionary expectations that scientists of the 1970s held for the robotic and automated intelligence systems of the following century. The reality is very different, although many improvements have been made. Any automated system has three main aspects: the strength of the robot, which determines the physical payload it can move; the physical structure of the robot in relation to that payload; and the robotic intelligence. The field in which the most significant technological advances have been made in the last few years is robotic intelligence. The amount of manual interaction an automated system needs, and its ability to reason and carry out independent tasks, have greatly improved, partly through the use of integrated vision systems and the development of the omnidirectional vision system. [1]

A Brief Outline of Vision Systems

Machine vision systems can be developed and refined to meet a user's specific application requirements, so selecting the correct system for the operation in question can be challenging. The wrong initial choice can result in inadequate inspections, decreased productivity and increased rejections, as well as incurring a large financial cost to the company. Take, for example, a manufacturer of precision-engineered engine parts for the aerospace industry. Each product must be inspected to ensure that the dimensions, shape and formation of the part are correct. This requires image acquisition from the camera, followed by vision code to extract the edges of the component. The program then requires additional, efficient code to determine the exact spacing and form of the component. Finally, the program must run an analysis phase to decide whether the part is to be rejected or moved further along the production process. This involves hundreds of engineering hours, and because of the rapid development of the electronics in PCs and cameras, duplicate hardware that may be needed to keep a system running can become unavailable. Consequently the vision system must then be modified, retested and any advanced code debugged. [3] It is therefore no coincidence that the uptake of machine vision systems was insignificant until recent technological developments and the resulting upgrade of the system as a whole emerged. [4]

New Technology Available in Vision Systems

According to a recently published report by Reportlinker.com, recent advances in machine vision technology, such as smart cameras and vision-guided robotics, have widened the scope of the machine vision market for application in both industrial and non-industrial sectors. One of the most rapidly advancing technologies is that behind 3D vision systems, more commonly termed omnidirectional vision systems within the industry. These systems are used to solve complex and challenging vision tasks within the automated manufacturing environments of production lines across several manufacturing sectors. Enabling 3D vision within a machine will enhance flexibility and robotic intelligence and consequently address some of the issues that are evident in current machine vision systems. [5]

The most effective way of realising 3D vision within a manufacturing robotics system is to use two cameras placed side by side to produce stereo vision, giving an almost instantaneous estimate of the distance to an object placed within a scene. Distance detection is a primary cue for detecting things that stand out in depth from the background. Stereo vision is also highly effective for segmenting objects and gauging their shape and size. [6] This is only possible because of the technical workings of the system.
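As a concrete illustration of the stereo principle (not part of the report's own material), a calibrated, rectified stereo pair with known focal length f and baseline B gives the distance to a point directly from its disparity d as Z = f·B/d. The sketch below is a minimal example with invented numbers:

```python
# Minimal sketch: depth from disparity for a rectified stereo pair.
# The focal length f (in pixels) and baseline B (in metres) are assumed to be
# known from calibration; the numbers here are illustrative only.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Return the distance Z (metres) to a point seen with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth estimate.")
    return focal_px * baseline_m / disparity_px

# Example: a feature shifted by 32 pixels between the left and right images.
print(depth_from_disparity(disparity_px=32.0, focal_px=700.0, baseline_m=0.12))  # about 2.6 m
```

Larger disparities correspond to nearer objects, which is why stereo vision is such a strong cue for separating foreground parts from the background.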

How Omnidirectional Vision Works

An omnidirectional vision system is composed of a CCD camera and a mirror which faces the camera. This produces a viewing range of 360° in the horizontal direction and 120° in the vertical direction.


There is therefore an inherent blind spot within the viewing range of this type of camera, and so research prototypes have been developed to evolve a fully functioning multidirectional vision system in which this blind spot does not occur. The diagram in figure 1.1 illustrates the structure of such an omnidirectional vision system, obtaining real-time 360° x 240° omnidirectional images. The lens of the camera is fixed at a single viewpoint behind a hyperbolic mirror. As can be seen in the diagram, there are two reflection mirrors. At the centre of the primary reflection mirror there is a small hole through which the camera shoots video information. Beyond the primary reflection mirror there is a secondary reflection mirror, and at the centre of the secondary reflection mirror there is another small hole with an embedded wide-angle lens. Therefore, as the camera is shooting, the information being gathered undergoes two reflections.

The first reflection occurs at the wide-angle lens in the secondary reflection mirror; a second reflection then occurs at the small hole within the primary reflection mirror before the image is captured by the camera. These reflections exist to move the position of the imaging point, the point at which the image being captured is formed relative to the lens of the camera. This structure has two imaging points: the first is between the wide-angle lens and the lens of the camera, and the second is at the focus of the camera lens. This use of reflected light and hyperbolic mirrors eliminates the dead angle, or blind spot, in front of the primary reflection mirror, i.e. it ensures that the image the camera is trying to capture is channelled towards the camera lens. This makes the design of the mirror component critical.

The design of the catadioptric mirror for this omnidirectional vision system adopts an average angular resolution: the relationship between a point on the imaging plane and the incidence angle at the mirror is linear, as shown in appendix 1. As appendix 1 illustrates, the incident beam V1 from a light source P hits the primary reflection mirror and is reflected. This reflected beam of light, V2, strikes the secondary reflection mirror and is reflected again. The resulting beam, V3, then enters the camera lens with an incidence angle of φ1 and forms an image on the camera unit. Using the equations outlined in appendix 2, the curvature of the primary and secondary reflection mirrors can be calculated accurately, giving the result shown in figure 1.2.

Once the curvature of the mirrors has been established, the next most important factor to consider is the lens combination of the imaging unit itself. The video information in front of the secondary reflection mirror is invisible within the current design of the vision system. In order to obtain information from in front of the secondary reflection mirror, a wide-angle lens must be used. The wide-angle lens and the camera lens together compose a combination lens device, as shown in figure 1.1. The wide-angle lens is situated in front of the primary reflection mirror and embedded in the secondary reflection mirror. The central axis of the camera lens, the central axis of the wide-angle lens, the primary reflection mirror and the secondary reflection mirror are positioned along the same axis, as displayed in figure 1.3.

The picture projected through the hole in the primary reflection mirror forms an image between the wide-angle lens and the camera lens; this is known as the first imaging point. The picture projected through the camera lens forms an image in the camera component.


FIG. 1.1. The diagrammatic layout of an omnidirectional vision system. [PIC1]

FIG. 1.2. The designed curvature of the primary and secondary reflection mirrors. [PIC2]

FIG. 1.3. The positioning of the camera, wide-angle lens and mirrors within the system. [PIC3]


With all of this information, the lens diameter and the focus of the camera can then be determined and the relationships between these quantities derived; this is summarised in appendix 3.

This camera design enables 360° x 240° viewpoints; however, placing two of these omnidirectional vision systems together in a back-to-back configuration can produce the desired 360° x 360° viewpoint, with all images being captured in real time. The corresponding camera design and its associated image-processing flow are shown in figure 1.4.

Each camera lens and wide-angle lens of the omnidirectional vision system captures images and automatically detects the centre of each image. Before the unwrapping of the omnidirectional image occurs, the central part of the image needs to be separated out. This is done to obtain as true an image as possible; it also explains the use of a connector to join the two omnidirectional vision systems together. Both systems have the same average angular resolution, so no dead angle is incurred. The video cable and power cable exit through a hole within the connector. Each video cable from the separate omnidirectional vision systems connects to a video image access unit. Because each camera within an omnidirectional vision system has a view range of 360° x 240° and the same average angular resolution in the vertical direction, the image information from the two omnidirectional vision systems can easily be fused. The video access unit reads the image information from both vision systems separately, stores the images continuously and then splits out the circular video images captured by the combination lenses. The separate images obtained by the two omnidirectional vision systems are unwrapped using an unwrapping algorithm. After unwrapping, the two separate images are stitched together; the result is shown in figure 1.5. [7] This type of imaging has been used in many applications for several years now.
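The report does not detail the unwrapping algorithm itself; a common approach, offered here only as an assumed sketch rather than the authors' exact method, is a polar-to-rectangular remap about the detected image centre. The function name, centre coordinates, ring radii and output size below are all hypothetical parameters:

```python
import numpy as np

def unwrap_omnidirectional(img: np.ndarray, cx: float, cy: float,
                           r_inner: float, r_outer: float,
                           out_w: int = 1024, out_h: int = 256) -> np.ndarray:
    """Polar-to-rectangular unwrapping of a circular omnidirectional image.

    img              -- H x W (x C) array holding the circular (donut-shaped) image
    cx, cy           -- detected centre of the circular image
    r_inner, r_outer -- radii bounding the useful ring of pixels
    Returns an out_h x out_w panorama (nearest-neighbour sampling, for brevity).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)   # azimuth per output column
    radius = np.linspace(r_outer, r_inner, out_h)                  # radius per output row
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    src_x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    src_y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[src_y, src_x]
```

The two unwrapped strips, one from each camera of the back-to-back pair, could then be stitched along their shared boundary to approach the full 360° x 360° view.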

Existing Applications of Omnidirectional Vision Systems

One of the most well-known robots to use a 3D vision system is Honda's humanoid robot, Asimo, which was first unveiled at the turn of the millennium. Asimo is now capable of identifying people, their postures and their gestures, and of moving independently in response. The camera mounted in Asimo's head can detect the movements of multiple objects while assessing distance and direction, much like the omnidirectional vision system outlined above. However, the intelligence most applicable to a manufacturing setting is environment recognition. Asimo can assess its immediate surroundings, recognise the position of obstacles, both people and immovable objects, and avoid them to prevent a collision. [8] From a safety point of view this would be a desirable quality for the manufacturing industry. Asimo, however, is not the only robot using this technology.


FIG. 1.4. The layout of a back-to-back omnidirectional vision system and its image-processing flow. [PIC4]

FIG. 1.5. An unwrapped image created using this vision system. [PIC5]


3D vision systems are an integral part of the autonomous intelligence within an unmanned aerial vehicle (UAV). The GD170 is a sophisticated two-axis, high-resolution UAV vision system. This lightweight, stand-alone, gyro-stabilised daylight observation system is protected against wind loads, humidity and dust by an optically perfect Lexan dome. The vision system assists with applications such as damage assessment, search and rescue, traffic surveillance, coastal and border control, anti-terrorism operations, general surveillance, anti-smuggling surveillance and infrastructure inspection. [9] Some of these operations are very similar to those that occur within an industrial manufacturing environment, so it is easy to see how these types of vision system could be utilised within this sector.

A novel but interesting application using a type of 3D vision system is a robot that assembles Lego. A video detailing the robot and how it works can be viewed at the following link: http://www.youtube.com/watch?v=n6tQiJq9pQA. This robot has been designed to complete repetitive and monotonous tasks, similar to those many people find themselves undertaking in the manufacturing industry. [10] The intelligence used here is therefore easily transferable to a manufacturing environment.

Automated Manufacture with Omnidirectional Vision Systems

According to the video linked above, the technology used within the robot at the International Robot Exhibition 2009 was a world first. Within manufacturing and engineering there is a need to raise automation standards, and this could be achieved through the use of systems such as the one found in the Lego-building robot. This intelligent technology is now making robots and machinery with it embedded a superior alternative to human labour.

The ability of an omnidirectional vision system to deliver high accuracy while maintaining throughput on the production line enables the sought-after process and quality control needed for lean and flexible manufacturing systems. The depth of vision and the high-quality images produced enable the 3D omnidirectional vision system to serve as an efficient quality control tool. [11]

Relating back to current applications of omnidirectional vision systems, we can also quickly identify other areas of manufacturing where this type of technology could be beneficial. The robot from the International Robot Exhibition shows that people could easily be replaced with robots. This is probably not desirable for sociological reasons, but monotonous, repetitive and dangerous jobs could be done by an automated system with omnidirectional vision, cutting cost and improving safety. As Asimo and UAVs have shown, people recognition is a key aspect of this type of robotic intelligence, and it could be useful within manufacturing in one of two ways: 1) the identification of people would dramatically improve safety, as many conventional automated systems rely on electronic safety precautions that still leave some scope for an industrial accident to occur; 2) the ability to sense depth and distance while avoiding obstacles could be utilised within automated guided vehicles, making that technology more efficient.

Conclusion

It is evident that the intelligence within machine vision systems is developing at a fast pace. An extensive amount of research into the use of 3D omnidirectional vision systems is currently taking place across the globe, and these systems are set to replace older machine vision sensing techniques such as the solid-state camera. The incorporation of newly developed 3D vision technology will enhance flexibility within the manufacturing process and within robotic technologies, while also addressing some of the current issues with machine vision, including complexity, cost and the length of time needed to obtain information.

The design outlined above represents the latest research into the most efficient and effective way of producing a system with 360° x 360° 3D vision. It has been tried and tested but, as yet, has not been embedded in or used for any significant applications within manufacturing. However, vision systems similar to the design depicted above have been used in applications such as unmanned aerial vehicles and humanoid robotic systems. This technology could readily be placed within a manufacturing setting for applications such as quality control, safety and the improvement of efficiency within automated guided vehicles.


It is easy to see that this intelligence and technology could be of great use within the industry; however, the beneficial results and improvements must be weighed against the sociological consequences of replacing human labour with that of an intelligent machine.

References

[1] http://www.jimpinto.com/writings/robotics.html - accessed 22/10/2011
[2] http://en.wikipedia.org/wiki/C-3PO - accessed 22/10/2011
[3] http://www.controleng.com/search/search-single-display/inside-machines-embedded-machine-vision-systems-an-alternative-to-pc-vision-systems/922912c575.html - accessed 1/11/2011
[4] http://www.frost.com/prod/servlet/report-brochure.pag?id=D366-01-00-00-00 - accessed 1/11/2011
[5] http://www.digikey.com/us/en/techzone/sensors/resources/articles/five-senses-of-sensors-vision.html - accessed 1/11/2011
[6] http://www.tyzx.com/news/pdf/IEEE%20Computer%20article%20for%20post.pdf - accessed 8/11/2011
[7] http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - accessed 17/11/2011
[8] http://world.honda.com/ASIMO/technology/intelligence.html - accessed 17/11/2011
[9] http://www.uavvision.com/gimbals/gd170.html - accessed 17/11/2011
[10] http://www.youtube.com/watch?v=n6tQiJq9pQA - accessed 17/11/2011
[PIC1] http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - accessed 17/11/2011
[PIC2] http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - accessed 17/11/2011
[PIC3] http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - accessed 17/11/2011
[PIC4] http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - accessed 17/11/2011
[PIC5] http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - accessed 17/11/2011


Appendix 1 – The Relationship Between a Point on the Imaging Plane and the Angle of Incidence at the Mirror

A relationship is built between P, the distance from a pixel to the spindle axis Z, and the incidence angle \phi:

\phi = a_0 \cdot P + b_0

where a_0 and b_0 are arbitrary parameters.

If f is the focus of the camera unit, P is the distance from the pixel to the spindle axis Z, and P_2(t_2, F_2) is the reflection point on the secondary reflection mirror, then according to the imaging principle:

P = \frac{f \, t_2}{F_2}
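As a purely numerical illustration of these two relations (the values of a_0, b_0, f, t_2 and F_2 below are invented for the example, not taken from the source paper):

```python
import math

# Illustrative parameters only -- the real values follow from the mirror design.
a0 = 0.002   # slope of the linear mapping (radians per pixel)
b0 = 0.0     # offset of the linear mapping (radians)
f = 800.0    # camera focal length expressed in pixels

def pixel_offset(t2: float, F2: float) -> float:
    """Imaging relation P = f * t2 / F2 for the reflection point P2(t2, F2)."""
    return f * t2 / F2

def incidence_angle(P: float) -> float:
    """Average angular resolution: phi = a0 * P + b0 grows linearly with P."""
    return a0 * P + b0

P = pixel_offset(t2=0.05, F2=0.08)        # 500 pixels from the axis
print(math.degrees(incidence_angle(P)))   # about 57 degrees
```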


Source: http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - pages 32-33, accessed 20/11/2011

Appendix 2 – The Curvature of the Primary and Secondary Reflection Mirrors

Substituting

P = \frac{f \, t_2}{F_2}

into

\phi = a_0 \cdot P + b_0

gives

\phi = a_0 \left( \frac{f \, t_2}{F_2} \right) + b_0

According to the catadioptric principle,

\tan^{-1}\!\left( \frac{t_1}{F_1 - s} \right) = a_0 \left( \frac{f \, t_2}{F_2} \right) + b_0

Using this equation together with

F_1^2 - 2\alpha F_1 - 1 = 0

and

F_2^2 - 2\beta F_2 - 1 = 0

a numerical solution for F_1 and F_2 can be found, giving the appropriate curvature values for both reflection mirrors.
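The two quadratics have a closed-form positive root, F = α + √(α² + 1); a complete mirror design would then fix the remaining parameters numerically against the catadioptric constraint above. The sketch below illustrates only the quadratic step, with invented values for α and β:

```python
import math

def mirror_parameter(coeff: float) -> float:
    """Solve F**2 - 2*coeff*F - 1 = 0 for F, taking the positive root.

    coeff plays the role of alpha (primary mirror) or beta (secondary mirror).
    """
    return coeff + math.sqrt(coeff * coeff + 1.0)

# Illustrative values only; the real alpha and beta come from the mirror design.
alpha, beta = 1.5, 0.8
F1 = mirror_parameter(alpha)
F2 = mirror_parameter(beta)
print(F1, F2)
```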


Source: http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - pages 32-33, accessed 20/11/2011

Appendix 3 – The Relationship Between Lens Diameter and Focus

According to the lens imaging equation,

\frac{1}{f_1} = \frac{1}{s_1} + \frac{1}{s_2}

\frac{1}{f_2} = \frac{1}{s_3} + \frac{1}{s_4}

d = s_2 + s_3

Taking the combination lens focus into account,

\frac{1}{f_3} = \frac{f_1 + f_2 - d}{f_1 f_2}

The lens diameter D gives a magnification of

n = \frac{D}{f_3}

In order for both entities to have the same average angle, the design must satisfy

n = \frac{D}{f_3} = 2\phi_{1,\mathrm{MAX}}

where \phi_{1,\mathrm{MAX}} is the maximum angle between the secondary reflected light V_2 and the Z axis.
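The combination focal length and the lens diameter implied by the magnification condition follow directly from these formulas. The sketch below uses invented focal lengths, separation and maximum angle purely to show the arithmetic:

```python
def combination_focal_length(f1: float, f2: float, d: float) -> float:
    """Effective focal length f3 of two thin lenses separated by d:
    1/f3 = (f1 + f2 - d) / (f1 * f2)."""
    return (f1 * f2) / (f1 + f2 - d)

def required_diameter(f3: float, phi1_max: float) -> float:
    """Lens diameter D satisfying the magnification condition n = D / f3 = 2 * phi1_max."""
    return 2.0 * phi1_max * f3

# Illustrative numbers only (metres and radians), not values from the source paper.
f3 = combination_focal_length(f1=0.025, f2=0.016, d=0.010)
print(f3, required_diameter(f3, phi1_max=0.6))
```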


Source: http://www.intechopen.com/articles/show/title/design-of-stereo-omni-directional-vision-sensors-with-full-sphere-view-and-without-dead-angle - page 34, accessed 20/11/2011
