[IEEE 34th Applied Imagery and Pattern Recognition Workshop (AIPR'05), Washington, DC, USA, 19-21 Oct. 2005]
Automatic Inspection System Using Machine Vision
Umar Shahbaz Khan, Javaid Iqbal, Mahmood A. Khan
Department of Mechatronics
College of E&ME, National University of
Science and Technology Rawalpindi
Pakistan 46000
E-mail [email protected],[email protected],[email protected]
Abstract
Man, from the beginning of time, has tried to automate things for comfort, accuracy, precision and speed. Technology advanced from manual to mechanical, and then from mechanical to automatic. Vision-based applications are the products of the future. Machine vision systems integrate electronic components with software systems to imitate a variety of human functions. This paper describes current research on a vision-based inspection system. A computer using a camera as an eye has replaced the manual inspection system. The camera is mounted over a conveyor belt. The main objective is to inspect for defects; instead of using complicated filters such as edge enhancement and correlation, a very simple technique has been implemented. Since the objects are moving over the conveyor belt, time is a factor that must be accounted for. Filters and correlation procedures give better results but consume a lot of time. The technique discussed in this paper inspects at the basic pixel level, checking on the basis of size, shape, color and dimensions. We have implemented it in five applications, and the results achieved were good enough to prove that the algorithm works as desired.
Table 1. Comparison between PC-based and Smart Camera machine vision
1. Introduction
Vision-based systems have been implemented in the industrial sector all over the world [15]. Previously, inspecting modules was a time-consuming manual process that gave inconsistent results between operators. There are hundreds of different applications, and many more are being developed or improved day by day [16]. Two types of machine vision system are in use: PC-based and smart-camera systems. A comparison between the two is shown in Table 1. We have implemented the PC-based machine vision system for its better flexibility, wider functionality and greater performance. Though such a system has poorer ruggedness and needs computer skills, the software and GUI have been made simple and user friendly.
| Parameters    | PC-based                                      | Smart Camera                            |
|---------------|-----------------------------------------------|-----------------------------------------|
| Flexibility   | Excellent                                     | Poor                                    |
| Ruggedness    | Poor                                          | Excellent                               |
| Size          | Multiple-box system; imaging head can be very small | All-in-one box; not necessarily very small |
| Functionality | Expandable                                    | Limited                                 |
| Performance   | Expandable                                    | Limited                                 |
| Ease of use   | Needs computer skill                          | No computer skill needed                |
A conveyor belt is designed to carry objects under
a camera for visual inspection. The camera is
interfaced with a computer. The system picks frames
and processes the data. The mechanical portion is the
conveyor belt which is made from sheets of aluminum.
The conveyor is light and portable. Two motors are attached: a DC motor that runs the belt [6] and a stepper motor that moves the separating rod [8], [14]. The electrical portion consists of power supplies, electronic circuits and a parallel port for
Proceedings of the 34th Applied Imagery and Pattern Recognition Workshop (AIPR05) 0-7695-2479-6/05 $20.00 © 2005 IEEE
interfacing with the computer [6], [10]. The software is based on image processing techniques [1], [2].
Instead of complicating the process with too many calculations, the software follows a very simple approach that takes less processing time and gives better accuracy. The system processed different objects and separated them on the basis of size, shape, color, missing parts and dimensions.
The size, shape and color of every object are unique. These three attributes are the criteria that enable a human eye to differentiate between two objects. Area is another factor. The computer does not know what it is looking at unless it is told so. Frames are picked up by the computer in the form of a pixel matrix. A specific set of pixels may describe an object's area, so given the proper threshold value a group of pixels may identify an object's area [3]. The length and width of the object can also be calculated by keeping a fixed distance between the object and the camera and then applying a pixel-to-length ratio.
Once the computer determines whether an object is to be accepted or rejected, it controls a lever attached to the stepper motor via the parallel port. The lever directs the object to the specified tray by blocking its path to the other tray. This simple technique is applied instead of a heavy-duty robotic arm or a moving-tray system.
The technique has been implemented on a number of objects: bullets, resistors, capacitors, LEDs, clips, cigarettes and rubbers. Bullets were differentiated on the basis of their size; resistors and capacitors on the basis of missing parts (e.g. broken legs); clips on the basis of improper shape; and cigarettes, LEDs and rubbers on the basis of color and size.
2. Mechanical Design
The mechanical part is the conveyor system itself. The conveyor is not a single unit; all the parts are attachable and can be detached easily.
2.1 Main Body
Sheets of aluminum have been folded into the
conveyor walls and base of the main body. Two
rollers along with bearings and a shaft capable of
rotating are attached to the extreme ends of the belt.
On one side of the belt a DC motor has been attached,
which rotates the rollers. A belt is then fitted over the
two rollers, tight enough to get a firm grip. A
compartment has been made under the belt where the
electrical circuitry is fitted, which helps in its safe
transportation.
2.2 Camera Attachment
An L-shaped rod, 30 cm high from the base, is attached to the middle of the conveyor. The camera is attached to this rod and can be adjusted to any position over the belt, covering the whole area of the conveyor.
2.3 Feeder Tray
The input feeder tray is attached to one end of the conveyor. Two attached levers can be adjusted according to the size of the object under inspection. The feeder tray is inclined at an angle of 30 degrees to the horizontal so that the objects slide easily onto the belt.
2.4 Output Trays
There are two output trays attached to the other
side of the belt. Both trays are made by folding
aluminum sheets. The folded portion also serves as a
separating wall. The trays are small in size and
inclined at an angle of -50 degrees so that the object
may slide easily when it comes off the conveyor belt.
2.5 Separating Rod
A stepper motor is attached to the base near the output trays, and a separating rod is attached to the top of the stepper motor. Controlled left and right motion of the stepper motor moves the rod accordingly: when the rod turns to the right it blocks the path to the right tray, so objects go to the left tray, and vice versa.
3. Electronic Control
The system involves two motors. One is a 12 V, 1.2 A DC motor used to drive the belt, and the other is a 6 V, 1.2 A stepper motor used to drive the separating rod. The electronic circuitry and power supplies are fitted inside the conveyor body and insulated to protect against short circuits.
A worm gear installed on the 12 V DC motor enables it to provide sufficient torque to move weights of up to 5 kg. The motor is driven by an L298 IC, which has three control pins: one enables the motor and the other two set its direction. Since the motor only ever moves in one direction, its direction pins are tied to fixed voltages; the enable pin is controlled by the software to run or stop the motor.
To drive the stepper motor, instead of using an IC,
four transistors are used as switches. By periodically switching these transistors, the stepper motor is made to turn. The pulses for switching are controlled through software.
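The switching scheme can be sketched as follows. This is an illustrative Python sketch, not the paper's code (which is written in Visual Basic); the bit patterns assume a full-step, one-winding-at-a-time drive, which the paper does not specify.

```python
# Full-step drive: one winding (one transistor) energised per step.
# Writing these patterns to the port in sequence advances the motor;
# reversing the sequence reverses the rotation. The patterns and the
# helper below are hypothetical, for illustration only.
FULL_STEP = [0b1000, 0b0100, 0b0010, 0b0001]

def step_patterns(steps, direction=+1, start=0):
    """Return the byte written to the port for each of `steps` steps."""
    idx = start
    out = []
    for _ in range(steps):
        idx = (idx + direction) % 4
        out.append(FULL_STEP[idx])
    return out

cw = step_patterns(4)                  # four steps one way
ccw = step_patterns(4, direction=-1)   # four steps back the other way
```

Cycling the pattern list forward turns the rod one way; cycling it backward turns it the other, which is how the separating rod is steered toward either tray.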
4. Computer Interface
The camera and the stepper motor are connected to the computer.
Figure 1. Diagram of the System
4.1 Camera Interface
An analogue camera (model 801C) is interfaced to the computer via a PixelView TV tuner card. The software interacts with the TV tuner card drivers to get the frames from the camera. The programming is done in Visual Basic. To get an image from the camera, an ActiveX file, "VIDEO.OCX", is used [11]. This file automatically interacts with the drivers of the video card and displays the video on the screen.
4.2 Stepper Motor Interface
The stepper motor is connected to the computer via the parallel port. An octal 3-state buffer (74LS240) protects the parallel port from back current. Pulses are sent through software. To gain control of the parallel port under Windows XP, a dynamic link library, "inpout32.dll", is used; this file is placed in the system32 folder of the operating system. A module declared in the program introduces two commands: "inp" to get data from the port and "out" to send data to it.
5. Methodology
The programming environment used is Visual Basic. It is easy to use, simple to understand and has a good graphical user interface. Other languages, such as Turbo C, Visual C and Matlab, could also be used: Turbo C does not have an attractive GUI, Visual C is complicated and not so user friendly, while Matlab is simple and user friendly but slow. The only drawback VB has compared with VC is its slower speed. The programming is divided into the following parts.
5.1 Capturing Video
The first step in the programming is capturing the video. An OCX file, "video.ocx", is placed in the system32 folder of the operating system [11]. This provides components that can be added to the Visual Basic environment. The file automatically interacts with the drivers of the video device and displays the image on the screen. The command "videocapture1.capture = true" starts the video. Moreover, video.ocx provides many other options, such as selection of source, resolution, format, height and width.
5.2 Converting Video into Matrix
Once the video is obtained, the next step is to convert it into a matrix for calculations and the application of filters. A module is introduced for this purpose, using the command "RtlMoveMemory" from the "kernel32" library [12]. RtlMoveMemory copies the data of a frame into a one-dimensional array [13]. The data in the frame are the red, green and blue (RGB) values of each pixel [3], so the RGB values of all pixels are stored in order in the one-dimensional array. This array is split into three matrices, red, green and blue, giving a complete per-channel pixel matrix for the whole frame. Using the three color matrices and the grey-scale formula, a grey-scale matrix is created [1], [2]. The grey-scale formula is:
Grey = (0.3 * blue) + (0.59 * green) + (0.11 * red)
The video is taken at a resolution of 160 by 120, so a matrix of these dimensions is formed. All calculations and filters are applied to this matrix, so a fast processing system is required.
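A rough Python equivalent of this frame-to-matrix step (the paper's implementation uses RtlMoveMemory in Visual Basic; the flat three-bytes-per-pixel layout assumed here is illustrative). Note that the paper's grey formula weights blue by 0.3 and red by 0.11, the reverse of the usual luminance weights, possibly reflecting the BGR byte order of Windows frame buffers; the sketch applies the formula exactly as the paper states it.

```python
def frame_to_matrices(flat, width, height):
    """Split a flat per-pixel byte array (assumed 3 bytes per pixel,
    in the order the paper calls red, green, blue) into R, G, B and
    grey matrices of shape height x width."""
    red, green, blue, grey = [], [], [], []
    for y in range(height):
        r_row, g_row, b_row, k_row = [], [], [], []
        for x in range(width):
            i = 3 * (y * width + x)
            r, g, b = flat[i], flat[i + 1], flat[i + 2]
            r_row.append(r); g_row.append(g); b_row.append(b)
            # Grey = (0.3 * blue) + (0.59 * green) + (0.11 * red),
            # as given in the paper.
            k_row.append(0.3 * b + 0.59 * g + 0.11 * r)
        red.append(r_row); green.append(g_row)
        blue.append(b_row); grey.append(k_row)
    return red, green, blue, grey
```

For a 160 x 120 frame, `frame_to_matrices(flat, 160, 120)` yields the four matrices the later sections operate on.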
5.3 Improving Quality of Image
The grey-scale matrix is the image in quantitative form. The quality of the image must be improved so that the results are precise and accurate. Two techniques have been used in this project: histogram analysis and threshold setting.
5.3.1 Histogram. A histogram counts and graphs the total number of pixels at each grey-scale level. It determines whether the overall intensity of the image is suitable for inspection: the histogram shows whether the image is too dark (underexposed) or too bright (saturated). Using histogram analysis, the image acquisition conditions can be adjusted to obtain a higher-quality image.
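A minimal exposure check along these lines can be sketched in Python (the paper does not give its criteria; the tail width and fraction below are illustrative assumptions):

```python
def histogram(grey_matrix, levels=256):
    """Count the number of pixels at each grey level."""
    counts = [0] * levels
    for row in grey_matrix:
        for v in row:
            counts[min(int(v), levels - 1)] += 1
    return counts

def exposure_ok(counts, tail=16, max_fraction=0.5):
    """Flag frames where too many pixels pile up in the darkest or
    brightest `tail` levels (thresholds hypothetical, not the paper's)."""
    total = sum(counts)
    dark = sum(counts[:tail])
    bright = sum(counts[-tail:])
    return dark / total < max_fraction and bright / total < max_fraction
```

A frame failing this check would prompt adjustment of the acquisition conditions before inspection proceeds.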
5.3.2 Setting Threshold. Thresholding selects a range of pixel values in grey-scale or color images that separates the object under consideration from the background. Using this technique the image is converted into a binary image with pixel values of 0 and 1: all values within a certain range, or threshold interval, are set to 1 and all other values are set to 0.
5.4 Checking for Defects

An object can be differentiated on the basis of its color, size, shape and dimensions.
5.4.1 Particle Measurement. To differentiate on the basis of size and shape, the grey-scale matrix is used for particle measurement. Initially, when there is nothing on the belt, the black background gives grey-scale values near zero; if every frame has values near zero, there is nothing on the belt. If a white, square object is placed on the belt, the matrix gives values near 255 for a specific set of pixels, meaning that those pixels describe the white box on the black background. After thresholding, the background has a value of 0 and the object pixels a value of 1, so we obtain a specific area of 1s in the matrix, known as a "blob". Counting the pixels in the blob is one way of differentiating objects: an object of the same color but smaller size yields fewer pixels than the reference count, while a larger object yields more. In this way objects can be differentiated on the basis of their size, as shown in Figure 2.
[Figure 2: first object — size = 6 pixels (length 2, width 3); second object — size = 12 pixels (length 3, width 4)]
5.4.2 Dimension Comparison. It is also possible for an object of a different shape to contain the same number of pixels as the reference object. Suppose a white square comprises one hundred pixels and a white rectangle also comprises one hundred pixels; the previous check alone could not differentiate between the two. To solve this problem, the software is given the dimensions of the object: it is told that the square is ten pixels long and ten pixels wide. The software therefore checks not only the number of pixels but also the dimensions. The rectangle will not fulfil the criterion of being ten pixels long and ten pixels wide, so it is rejected. In this way the shape of the object is catered for.
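The combined size-and-dimension check can be sketched in Python (illustrative; the paper's code is in Visual Basic, and the bounding-box reading of "length" and "width" is our assumption):

```python
def blob_stats(binary):
    """Return (pixel count, length, width) of the 1-pixels' bounding box
    in a binary matrix produced by thresholding."""
    coords = [(y, x) for y, row in enumerate(binary)
              for x, v in enumerate(row) if v == 1]
    if not coords:
        return 0, 0, 0
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return len(coords), max(ys) - min(ys) + 1, max(xs) - min(xs) + 1

def accept(binary, exp_size, exp_len, exp_wid, tol=0):
    """Accept only if pixel count AND both dimensions match (within tol)."""
    size, length, width = blob_stats(binary)
    return (abs(size - exp_size) <= tol and
            abs(length - exp_len) <= tol and
            abs(width - exp_wid) <= tol)
```

A 100-pixel square and a 100-pixel rectangle pass the size check equally, but only the square passes `accept` with the square's expected dimensions.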
Figure 2. Objects of the same shape but different size
[Figure 3: first object — size = 6 pixels (length 2, width 3); second object — size = 6 pixels (length 1, width 6)]
Figure 3. Objects with the same number of pixels but different shape
5.4.3 Color Comparison. It is also possible for a square of the same size but a different color to pass over the belt. The first two checks will accept the square if it also gives a grey-scale value at the required level, even though the object is defective. This is where the red, green and blue matrices are considered. For a red object, the red matrix shows higher values than usual, so if the red value is greater than a specified threshold the
object is considered to be red. Moreover, a comparison between white and red shows that for white the green matrix values are always greater than the red matrix values, while for red the reverse holds. Using these relationships the colors can be distinguished: the matrix values are compared with the color threshold values and a decision is made.
Figure 4a. Pixel-to-length ratio
Figure 4b. Calculating the dimensions of an object
Figure 5. Flow chart of the code
5.4.4 Dimensions Calculation. The system can also calculate the length and width of an object. Suppose the total area of the frame is 10,000 pixels, i.e. 100 pixels long and 100 pixels wide, while the area it covers in real space is 10 cm long and 10 cm wide. We can then find a relation between the number of pixels and the real length: one hundred pixels represent ten centimetres, so ten pixels represent one centimetre and one pixel represents one millimetre. An object twenty pixels long and twenty pixels wide would therefore be 2 cm (20 mm) long and 2 cm (20 mm) wide. Using this technique the dimensions of an object can be found; these dimensions are then compared with the original dimensions of the object and the required decision is made. Figure 4a shows the pixel-to-length ratio of the video on screen and the total area covered by the camera; Figure 4b shows the dimensions of an object measuring 30 × 100 pixels.
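The worked example of Figure 4 can be sketched directly in Python (illustrative; the function names are ours). With a 160 × 120-pixel frame covering 12.5 cm × 10.5 cm, each axis gets its own multiplying factor:

```python
def multiplying_factors(frame_w_px, frame_h_px, real_w_cm, real_h_cm):
    """Centimetres per pixel along the width and length axes."""
    return real_w_cm / frame_w_px, real_h_cm / frame_h_px

def object_size_cm(obj_w_px, obj_l_px, xw, xl):
    """Convert an object's pixel dimensions to centimetres."""
    return obj_w_px * xw, obj_l_px * xl

# Figure 4's numbers: XW = 12.5 / 160, XL = 10.5 / 120.
xw, xl = multiplying_factors(160, 120, 12.5, 10.5)
# For the 100 x 30 pixel object of Figure 4b:
w_cm, l_cm = object_size_cm(100, 30, xw, xl)
```

This reproduces the figure's values: a width of about 7.81 cm (100 × 0.0781) and a length of about 2.62 cm (30 × 0.0875).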
The code first checks on the basis of particle measurement. It then has two options, the choice between them being made by the user: if the object is to be checked for color, the code follows path 2, first checking on the basis of dimensions and then on the basis of color; otherwise it follows path 1, finding the dimensions of the object and comparing them with the original dimensions. The complete process is shown in Figure 5.
[Figure 5 flow chart: START → check on the basis of number of pixels for size → select solution 1 or 2; path 1: calculate dimensions, compare to original dimensions; path 2: check on the basis of dimensions for shape, then check for color; a "no" at any check rejects the object, otherwise it is accepted → END]
[Figure 4 data: the 160 × 120-pixel frame covers 12.5 cm × 10.5 cm of real length, giving multiplying factors XW = 12.5 / 160 = 0.0781 and XL = 10.5 / 120 = 0.0875; for an object of 100 × 30 pixels, width = 100 × 0.0781 = 7.81 cm and length = 30 × 0.0875 = 2.62 cm.]
5.5 Graphical User Interface
Figure 6. A screenshot of the GUI
The GUI of the program is made user friendly. Figure 6 shows one of the applications, implemented on bullets. The "Color" field gives the color of the object passing under the camera; the bullet's length and diameter are also indicated. "Object" shows which type of bullet from the database is passing under the camera. The output is a green light indicating that the object is accepted; if a defective object comes into view, the output changes to a red light. Four option buttons select which object from the database is to be separated, and the user can switch to another object at runtime.
6. Results
We have inspected objects such as bullets, capacitors, resistors, clips, rubbers and cigarettes for their size, shape, color, missing parts and dimensions. Instead of using complicated filters such as correlation and edge enhancement, we analyzed the objects with the simplest and fastest approach possible: inspecting their grey-scale and RGB pixel values. This approach not only saves time but also shows that machine vision is possible without the application of complicated filters.
7. Conclusion
A short program, fewer calculations and a simple GUI make the fault-detection procedure efficient and fast. The program can easily be altered for other objects, and including new objects in the database is simple and not too time consuming. The complete project is cheap and portable, and the adjustments are simple. It follows a simpler approach than the existing, more complicated techniques in use. The entire process is automatic and needs no manual control. The main scope of the paper was to recognize an object on the basis of size, shape, dimensions and color; when the values are set for one specific environment, the accuracy is around 95%.
8. References

[1] Rafael C. Gonzalez, "Digital Image Processing".
[2] John C. Russ, "The Image Processing Handbook".
[3] Kenneth R. Castleman, "Digital Image Processing".
[4] Al Bovik, "Handbook of Image and Video Processing".
[5] K. Mikolajczyk, A. Zisserman and C. Schmid, "Shape recognition with edge-based features", British Machine Vision Conference, September 2003.
[6] R. Krishnan, "Electric Motor Drives: Modeling, Analysis, and Control".
[7] F. Rothganger, S. Lazebnik, C. Schmid and J. Ponce, "Segmenting, modeling and matching video clips containing multiple moving objects", IEEE Conference on Computer Vision, 2004.
[8] David Benson, "EASY STEP'n, An Introduction to Stepper Motors for the Experimenter", Square 1 Electronics.
[9] www.ee.ttu.edu/lab/robot/drives.htm. Last cited March 2005.
[10] www.aaroncake.net/circuits/supply.htm. Last cited March 2005.
[11] www.videoocx.de/index.htm?/quotes.htm. Last cited March 2005.
[12] http://www.dll-files.com/dllindex/dll-files.shtml?kernel32. Last cited March 2005.
[13] http://www.osronline.com/ddkx/kmarch/k109_0w8i.htm. Last cited March 2005.
[14] http://www.stepperstuff.com. Last cited March 2005.
[15] http://www.precarn.ca/intelligentsystems/details_is. Last cited March 2005.
[16] http://www.spie.org/web/meetings/calls/pw01/confs/EI11.html. Last cited March 2005.