Object recognition

July 30, 2017 · Architecture

1 Introduction and Problem Definition

Today, object recognition in photo or video images is of great importance. In more and more fields, pattern recognition algorithms are used to improve products and make them more attractive to customers. In the automotive industry, for example, object recognition methods are used to recognize traffic signs or dangerous situations more quickly and tirelessly, alerting the driver by visual or audible signals.

Image processing algorithms, and of course object recognition, are also used in many areas of production for quality assurance, in order to detect defects or automatically reject products of poor quality and thus increase overall quality.


The problem of object recognition

The aim of this work is to present different object recognition methods and to identify their advantages and disadvantages. In the course of the work, a sample application is developed in which 45 different objects (see Appendix 10.2, parts list) are detected with an image-processing-based system. The featured object recognition algorithms are tested and evaluated on this application.

Outline of the work

Following this introductory first chapter, the second chapter briefly describes the fundamentals of image processing. It takes a closer look at image acquisition, preprocessing, segmentation, feature extraction, and classification.

So that the topic is not treated only in theory, the third chapter describes the system environment used for the task. The hardware and the software HALCON 9.0 are explained in more detail; HALCON supports the design and verification of image detection processes before they are finally incorporated into software products. The chapter deals with the functionality of this powerful software tool and highlights some of its specific features.

The fourth chapter contains the object recognition algorithms to be evaluated. These methods are presented, and their suitability for the object recognition task of this work is assessed. The classification methods presented are simple feature assignments, the cluster method, artificial neural networks, and the matching assistant of the HALCON HDevelop programming environment, which is based on template matching.

Chapter five demonstrates the use of the software HALCON 9.0 for the task and highlights the difficulties of object recognition and of preprocessing for the various object recognition algorithms. Another part of this chapter is a detailed description of the source program with the best detection rates.

The sixth chapter explains the detection rates achieved by the implemented object recognition algorithms and the problems encountered.

A summary of the work can be found in chapter seven. In addition, this chapter gives an outlook on further problems.

Chapters eight and nine contain the bibliography and the table of figures.

The appendix contains the main source code, a parts list used to verify the accuracy of the object recognition, and a feature table for comparing the feature values of the individual objects.

2 Image Processing Fundamentals

Image processing comprises many steps (see [NEUM05]). These include the pictorial detection of objects, the automatic processing of the images, segmentation with extraction of graphic information, and the classification based on it.

Figure 2.1: Steps in image processing

For capture, the first step of image processing, the optical image is converted into processable electronic signals. The illumination of the object also plays an important role in this step. Preprocessing includes processing steps for image enhancement, which can be achieved by filters. In segmentation, relevant content is highlighted, such as certain objects or contours; segmentation can also be used to find specific objects in an image. In feature extraction, the segmented objects are characterized: defined features, such as area or roundness, are assigned to the objects. In classification, the last processing step of image processing, the characterized objects are assigned to defined object classes.

2.1 Image Acquisition

The algorithmic solution to a problem can be simplified with high-quality images. This must be ensured in the first step of image processing, image acquisition.

An important condition when taking an image is to ensure uniform illumination, in order to avoid brightness gradients within regions of the same color. In addition, shadows cast by the objects and reflections (e.g. of the light source) should be avoided, as they can lead to problems or incorrect results in later processing steps. Furthermore, it is advantageous to choose a plain background, as this simplifies the distinction between object and background and thus the segmentation.

The image should also be taken frontally from above, so that no distortion occurs. A tripod can be used to prevent camera shake or "oblique" images. The distance between the object and the camera should always be the same, so that no unintended enlargements or reductions occur between the individual images. This, too, can be guaranteed by a tripod and a locking mechanism of the camera.

Procedure of image capture

Before capture, the image is a two-dimensional, time-dependent, continuous distribution of light energy. To obtain this distribution of light energy as a digital image on the computer, three steps are required (cf. [BURG06]):

1. The continuous distribution of light in space must be sampled.

2. The resulting function must be sampled in time in order to obtain a single image.

3. The individual values must be quantized into a finite number of possible numerical values, so that they can be represented on the computer.

Spatial sampling (1) is the transition from a continuous to a discrete distribution of light. This is done directly in the camera, by the geometry of the image sensor.

In temporal sampling (2), the amount of light is measured by the individual sensor elements. This is done by controlling the exposure time.

The quantization of pixel values (3) is done by analog-to-digital conversion. Here the image values are mapped to a finite set of numerical values.

After these three steps, the image exists as a discrete function and is described as a two-dimensional periodic array of numbers. For color images, the individual color components red, green, and blue are captured separately (in time or space).
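Step (3) can be sketched in a few lines. This is a minimal illustration (not HALCON code) of uniform quantization: continuous light-energy samples in [0.0, 1.0] are mapped to a finite set of 8-bit values, as a camera's analog-to-digital converter does.

```python
def quantize(sample, levels=256):
    """Map a continuous sample in [0.0, 1.0] to one of `levels` integers."""
    q = int(sample * levels)          # uniform quantization
    return min(q, levels - 1)         # clamp 1.0 to the top level

continuous_row = [0.0, 0.25, 0.5, 0.999, 1.0]
digital_row = [quantize(s) for s in continuous_row]
print(digital_row)  # [0, 64, 128, 255, 255]
```

With 256 levels this corresponds to the usual 8-bit gray-value range [0, 255] used throughout this chapter.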


There are different types of illumination and different light sources. The right choice of lighting contributes significantly to the success of an image processing project. The aim is to guarantee a spatially homogeneous and temporally stable illumination over the entire area to be evaluated (ROI, region of interest) and thus obtain images with optimal contrast.

In principle there are two fundamentally different types of illumination: reflected light and transmitted light.

With transmitted-light illumination, the light source is located opposite the camera, with the object lying in between. The disadvantage of this arrangement is that no color detection is possible, as opaque objects appear only as their contour; only with translucent objects is color detection possible. The advantage is that there are no disturbing shadows, so the contours of the objects are detected almost exactly. In principle, this method is suitable only for processes in which the color of the objects or their surface structure does not matter. Furthermore, because of the arrangement of camera, object, and light source, this method is not feasible for all applications, since it is not always possible to place a light source behind the objects.

With reflected-light (epi-)illumination, the light source is on the same side of the object as the camera. The advantage is that colors are visible with this method. The disadvantage is that disturbing shadows can arise, which can be problematic during recognition.

2.2 Image Processing – Filters

Filters are operations in which an input image is converted into an output image by means of mathematical functions. They are used for image enhancement and for preparing images for subsequent detection steps (such as edge detection). Filters work with a so-called filter window or structuring element, which is usually a square neighborhood of pixels. The window can, for example, be 3×3, 5×5, etc. in size. The pixels in a filter window are usually given weights, so that nearer pixels are weighted more strongly and more distant ones more weakly.

In HALCON, simple structuring elements (rectangles and circles with arbitrary dimensions and radii) are available. They can shrink or enlarge objects either depending on direction or uniformly in all directions.

There are two types of filters: linear and nonlinear filters.

2.2.1 Linear Filters

Linear filters are linear homogeneous functions and are also known as convolutions. They have an inverse transformation and require little computational effort.

For linear filters, the size and shape of the filter window and the associated weights are specified by a matrix of filter coefficients, the so-called filter matrix.

A filter operation is performed by pushing the filter matrix (filter operator) point by point over the image to be filtered. The pixel at the center of the filter window in the filtered image is then assigned the sum of the pixels of the original image multiplied by the filter matrix (see Figure 2.2). For color images, this procedure is performed for each color component.

Figure 2.2: Schematic of a linear filter [DEMA02]

  • Smoothing filter / low-pass filter (e.g. box filter, Gaussian filter)

    The smoothing filter is applied to compensate for locally rapidly changing gray values. These fluctuations are usually caused by noise. To smooth them out, the mean gray value within the filter window is calculated and written as the result for the current pixel (the center pixel of the filter window) in the output image.

    The disturbing gray-value peaks disappear by averaging the gray values of the entire filter matrix. Unfortunately, fine structures and edges are also blurred after filtering. The larger the filter window, the better the smoothing, but the more information is also lost.
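The averaging step can be sketched as follows. This is a minimal illustration (not HALCON code) of a 3×3 box filter: each output pixel is the mean of its 3×3 neighborhood. Border pixels are left unchanged here for simplicity; real implementations pad or mirror the border.

```python
def box_filter_3x3(img):
    """Replace each inner pixel by the mean of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; border stays unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9.0
    return out

# A flat gray image with one noisy peak in the middle:
noisy = [[10, 10, 10],
         [10, 100, 10],
         [10, 10, 10]]
print(box_filter_3x3(noisy)[1][1])  # 20.0 -- the peak is smoothed away
```

The example also shows the drawback mentioned above: the outlier is not removed but spread over the neighborhood, which is why edges blur.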

  • Differential filter / high-pass filter (e.g. Laplacian filter)

    While the smoothing filter compensates for gray-value differences, the differential filter amplifies them. The differential filter is ideal for highlighting lines, edges, or other sharp gray-value transitions. The computational implementation is like that of the smoothing filter. The differential filter can contain positive and negative coefficients.

To illustrate, Figure 2.3 lists examples of linear filters with their filter matrices.

Figure 2.3: Examples of linear filters [BURG06]

  1. box filter,
  2. Gaussian filter,
  3. Laplacian filter

2.2.2 Non-linear Filters (Morphological Filters)

Morphological image operators change the shape of objects in an image; the neighborhood of a pixel is taken into account. Nonlinear filters have no inverse operation and require more computational effort than linear filters. They are also used to correct erroneous disturbances, called artifacts, such as actually separate objects linked by bridges, or objects with bizarre contours that bear no resemblance to the original objects. Using a preset structuring element, the image is scanned line by line, and at each position the underlying gray values are collected as an ordered sequence, i.e. the sequence begins with the smallest gray value and ends with the largest. Depending on the filter, a pixel is then selected from the sorted sequence (e.g. the minimum or maximum). These filters are therefore also called rank-order filters. In the following, a few important non-linear filters are briefly presented.

Median filter

In contrast to the averaging operator (as used in the smoothing filter), the median filter is an edge-preserving smoothing filter, which however takes longer because of its sorting step. It is used to suppress point-like disturbances without producing a blurred image. The median filter works by replacing the gray value of the current pixel with the value lying in the middle of the ordered sequence.
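The edge-preserving property can be seen in a small sketch (not HALCON code) of a 3×3 median filter: the center pixel is replaced by the middle value of the sorted 3×3 neighborhood.

```python
def median_3x3(img, y, x):
    """Median of the 3x3 neighborhood around (y, x); no border handling."""
    window = sorted(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return window[4]  # middle of the 9 sorted values

noisy = [[10, 10, 10],
         [10, 255, 10],   # single salt-noise pixel
         [10, 10, 10]]
print(median_3x3(noisy, 1, 1))  # 10 -- the outlier vanishes completely
```

Unlike the box filter above, the outlier is removed entirely instead of being averaged into its neighbors.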

By a small modification of the median filter, an erosion (shrinking) and a dilation (enlargement) of a region can be achieved. Erosion and dilation can also be performed one after the other, in either order (opening / closing). The effect of an erosion cannot be undone by a dilation, and vice versa.

Minimum filter (erosion)

The minimum filter removes boundary pixels of objects and is used, for example, for shrinking or for eliminating small bright objects. With this operator, the gray value of the current pixel is replaced by the smallest value of the ordered sequence.

With the minimum filter, peaks of high gray values are removed without producing a blurred image, but spots of low gray values can be enlarged.

Maximum filter (dilation)

The maximum filter enlarges existing object structures by adding new pixels and is used, for example, for filling small holes or for joining pixel groups. Basically, dilation is nothing but the erosion of the background. Here, the gray value of the current pixel is replaced by the maximum value of the ordered sequence.

With the maximum filter, small spots of low gray values are removed and peaks of high gray values are enlarged.
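Both rank-order operators can be sketched together (not HALCON code): erosion takes the neighborhood minimum, dilation the maximum, here with a 3×3 structuring element. On a binary image this shrinks or grows objects.

```python
def morph_3x3(img, y, x, op):
    """Apply min (erosion) or max (dilation) over the 3x3 neighborhood."""
    window = [img[y + dy][x + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return op(window)

obj = [[0, 0, 0],
       [0, 1, 0],    # a single foreground pixel
       [0, 0, 0]]
print(morph_3x3(obj, 1, 1, min))  # 0 -- erosion removes the lone pixel
print(morph_3x3(obj, 1, 1, max))  # 1 -- dilation keeps (and would grow) it
```

Chaining the two calls in either order gives exactly the opening and closing operations described next.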


Opening

Opening is an erosion followed by a dilation. It serves to separate objects that are connected by bridges, or to erase small stray elements. A disadvantage is that small holes can grow. The opening operator can also be used to eliminate or detect certain shapes in the image.


Closing

Closing ensures that small non-contiguous areas of the image are closed, meaning that small holes formed during segmentation disappear. This is achieved by a dilation followed by an erosion. The downside is that previously separate objects can grow together.

2.3 Segmentation

The aim of segmentation is to obtain an image in which important parts are highlighted and can be distinguished from the background (see Figure 2.4). Segmentation represents a partition of a set of pixels into individual segmented objects. The decomposition must be complete, unambiguous, and without overlap. In addition, every pixel must be assigned to exactly one object or segment.

There are many segmentation techniques. They can be classified into point-oriented, edge-oriented, region-based, and rule-based methods. Since only the point-oriented procedure was used for this work, only that one is discussed in more detail.

Figure 2.4: Example of segmentation,

  1. original image,
  2. identified regions

2.3.1 Point Operations

Point operations are operations on images that change neither the size, the geometry, nor the local image structure, but affect only the values of individual image elements (cf. [BURG06]). Each pixel is therefore considered and segmented independently of the others.

Some examples of point operations are:

  • changing contrast and brightness,
  • restricting the value range (clamping),
  • inverting images,
  • thresholding.

Since thresholding is an important point operation and is used in the object recognition task at hand, it is the only one discussed further in this work. Thresholding is an effective method for segmenting images in which there is a large contrast between object and background. The computational cost is very low and the algorithm always returns separate regions.

In thresholding, the image values are divided into two classes, the first class representing the background (value 0) and the second the object or objects (value 1). The classes depend on the threshold value. In this operation, all pixels are assigned one of two intensity values relative to a fixed threshold, so that after the operation a binary image results.

Histograms can help to find an appropriate threshold. A histogram makes certain information visible in a compact view of the image: the size of the effectively used intensity range and the uniformity of the frequency distribution can be read from it.

From a gray-value histogram, the frequency distribution of the gray values of an image can be read (for an example see Figure 2.5). For each possible gray value, it shows the number of pixels that have this gray value. Although the location information of the pixels is lost, histograms are very useful for segmentation. A suitable threshold can be, for example, a relative minimum in the histogram. In Figure 2.5, for instance, a value of about 160 could be used as the threshold for segmentation.

Figure 2.5: (1) original image, (2) gray-level histogram
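Histogram computation and global thresholding can be sketched as follows (not HALCON code): pixels at or above the threshold become object (1), the rest background (0), yielding a binary image.

```python
def histogram(img, levels=256):
    """Count how many pixels have each gray value."""
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    return hist

def threshold(img, t):
    """Binarize: object pixels (>= t) become 1, background 0."""
    return [[1 if v >= t else 0 for v in row] for row in img]

img = [[20, 30, 200],
       [25, 210, 190],
       [30, 20, 220]]
print(histogram(img)[30])   # 2 -- two pixels with gray value 30
print(threshold(img, 160))  # [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

A good contrast between object and background shows up as two well-separated peaks in the histogram, with the threshold placed in the valley between them.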

2.3.2 Region Marking

After the point operation it is known which pixels belong to the foreground and which to the background. Region marking determines which foreground pixels belong to one object. An object is a group of contiguous foreground pixels. In order to obtain objects, the segmented regions are divided into connected areas: mutually adjacent pixels are assembled step by step into regions. This is possible, for example, with an 8-neighborhood, where a pixel belongs to an object if it is directly or diagonally adjacent to one of the object's pixels. All pixels within one region receive a unique identification number (label).
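Region marking can be sketched as a flood fill with 8-neighborhood (not HALCON code): every connected group of foreground pixels (value 1) receives a unique label, starting at 2 so labels cannot collide with the foreground value.

```python
def label_regions(img):
    """Label connected foreground components in place; return their count."""
    h, w = len(img), len(img[0])
    label = 2
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1:                 # unlabeled foreground pixel
                stack = [(y, x)]
                img[y][x] = label
                while stack:                   # flood-fill this region
                    cy, cx = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == 1:
                                img[ny][nx] = label
                                stack.append((ny, nx))
                label += 1
    return label - 2                           # number of regions found

binary = [[1, 1, 0, 0],
          [0, 1, 0, 1],
          [0, 0, 0, 1]]
print(label_regions(binary))       # 2 -- two separate objects
print(binary[0][0], binary[1][3])  # 2 3 -- their labels
```

Note that with the 8-neighborhood, the diagonally touching pixels of the first object count as one region; with a 4-neighborhood they would be split.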

In HALCON, the connection operator is used to compute connected regions.

2.4 Feature Extraction

After separating the objects/regions from the background, features are extracted from the objects so that they can later be properly described and classified.

2.4.1 Shape Features

Shape features are specific numerical or qualitative properties of a region, calculated from its pixels. So that a region is described as unambiguously as possible, several features are combined into a vector. This vector serves as a kind of signature of the region, so that regions can be distinguished from one another during classification.

In the following, some features are described that are used in this work. The corresponding HALCON operator is given in parentheses after each feature.

Perimeter (contlength)

The perimeter of a region is determined by the length of its contour, i.e. the number of boundary pixels of an object; the region must be connected.

Area (area_center)

The area of an object in a binary image is the number of pixels belonging to the object; the pixels belonging to the object are simply summed.

Roundness (roundness)

The measure of roundness is determined from the perimeter and area of the segmented object. The roundness of an object is also referred to as compactness. It is calculated as follows:

  K = U² / (4 · π · F)

The value of K equals one for a circle. The value of K is larger, the greater the perimeter of an object is in relation to its area.

This feature can be used to distinguish elongated objects from non-elongated ones. Roundness examines the distance of the contour from the centroid of the area.
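The measure K = U² / (4 · π · F) can be evaluated directly from a region's perimeter U and area F (a sketch, not HALCON code):

```python
import math

def compactness(perimeter, area):
    """K = U^2 / (4 * pi * F); equals 1 for an ideal circle."""
    return perimeter ** 2 / (4 * math.pi * area)

# For an ideal circle of radius r: U = 2*pi*r, F = pi*r^2, so K = 1.
r = 5.0
k_circle = compactness(2 * math.pi * r, math.pi * r ** 2)
# For a square of side a: U = 4a, F = a^2, so K = 4/pi (about 1.27).
k_square = compactness(4 * 10.0, 10.0 ** 2)
print(round(k_circle, 3), round(k_square, 3))  # 1.0 1.273
```

The square's value above one illustrates the statement in the text: the longer the contour relative to the area, the larger K becomes.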

Eccentricity (eccentricity)

The eccentricity measures the elongation (non-roundness) of a region. It is determined from the moments (see the section "Moments").

The eccentricity feature describes the region's enclosing ellipse with maximal aspect ratio and is defined as follows:

  Eccentricity(R) = ( [m20(R) − m02(R)]² + 4 · m11(R)² ) / [m20(R) + m02(R)]²

This measure yields values between zero and one. A round object has an eccentricity of zero, and an extremely elongated object has the value one.

From the geometric moments, three further shape features are derived: the anisometry, the bulkiness, and the structure factor.

  Anisometry = Ra / Rb

  Bulkiness = π · Ra · Rb / F

  Structure Factor = Anisometry · Bulkiness − 1

where Ra and Rb are the major and minor radii of the ellipse with the same moments as the region, and F is the region's area.

Circularity (circularity)

For circularity, the similarity of the region to a circle is calculated. The similarity is defined as follows:

  C = F / (π · max²)

Here F is the area of the region and max is the maximal distance from the centroid to the contour points. If the shape factor C equals one, the region is a circle. The shape factor is less than one if the region is elongated or has concave areas.

Diameter of a region (diameter_region)

The diameter of a region is the maximal distance between two boundary points of the region. The coordinates of the two extreme points and the distance between them are returned.

Intensity / brightness (intensity)

The intensity operator calculates the mean and the variance of the gray values of the input image within the regions.

Rectangularity (rectangularity)

With this feature, the rectangularity of the input region is calculated. A rectangle is determined that has the same first and second moments as the input region. Then the difference between the estimated rectangle and the input region is calculated, yielding the rectangularity measure.

Convexity (convexity)

The convexity is calculated as

  C = Fo / Fc

where Fo is the original area of the region and Fc the area of its convex hull.

The convex hull can be determined with morphological operations.

Center of gravity (area_center)

The center of gravity is calculated as the average of the row and column coordinates of all points of a region. If the coordinates of the object's centroid are known, it is easy to recognize whether the centroid lies within the object or not. This can serve as a simple decision feature for object recognition.
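The centroid computation, together with the inside/outside check just mentioned, can be sketched as follows (not HALCON code):

```python
def center_of_gravity(pixels):
    """pixels: list of (row, col) coordinates belonging to the region."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    c = sum(p[1] for p in pixels) / n
    return r, c

# An L-shaped region: its centroid falls outside the object pixels.
region = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
cog = center_of_gravity(region)
print(cog)            # (1.4, 0.6)
print(cog in region)  # False -- the centroid does not lie on the object
```

For a convex region, the centroid always lies on the object, which is exactly what makes this a usable decision feature.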

In HALCON, the centroid is calculated together with the area (area_center).

Moments (moments_region_central_invar)

Moments are used for shape description. The shape analysis beyond simple area measurement begins with the second-order moments, since the first-order moments are by definition zero and the zeroth-order moment gives the area of a gray-scale or binary object. The three second-order moments (m20, m02, m11) are to be understood as in mechanics: they are the moments of inertia for rotating the object around its center of gravity. The moments contain terms in which the density of the object is multiplied by the square of the distance from the center of gravity.

The central moments (I1, I2, I3, I4) are calculated as follows:

  I1 = m20·m02 − m11²

  I2 = (m30·m03 − m21·m12)² − 4·(m30·m12 − m21²)·(m21·m03 − m12²)

  I3 = m20·(m21·m03 − m12²) − m11·(m30·m03 − m21·m12) + m02·(m30·m12 − m21²)

  I4 = m30²·m02³ − 6·m30·m21·m11·m02² + 6·m30·m12·m02·(2·m11² − m20·m02) + m30·m03·(6·m20·m11·m02 − 8·m11³) + 9·m21²·m20·m02² − 18·m21·m12·m20·m11·m02 + 6·m21·m03·m20·(2·m11² − m20·m02) + 9·m12²·m20²·m02 − 6·m12·m03·m11·m20² + m03²·m20³

In statistical terms, the first moment corresponds to the expected value and the second central moment to the variance. The third central moment, after normalization, gives the skewness (the symmetry of a distribution about its mean), and the fourth central moment, after normalization, gives the kurtosis.

The normalized central moments (PSI1, PSI2, PSI3, PSI4) are invariant, i.e. moments that remain unchanged under motion and linear transformation. They are calculated as follows:

  PSI1 = I1 / m⁴

  PSI2 = I2 / m¹⁰

  PSI3 = I3 / m⁷

  PSI4 = I4 / m¹¹

where m = m00 is the area of the region. The moments are used to detect objects that appear in different orientations.

Orientation (orientation_region)

The orientation is the direction of the major axis, i.e. the axis extending through the center and along the largest extent of a region. It is calculated using the normalized central moments.

The orientation Phi is defined by:

  Phi = −0.5 · atan2(2.0 · m11, m02 − m20)
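The formula can be evaluated directly from a region's pixel coordinates (a sketch, not HALCON code): the central moments m11, m20, m02 are computed about the centroid, then fed into the atan2 expression.

```python
import math

def orientation(pixels):
    """Angle of the major axis from Phi = -0.5*atan2(2*m11, m02 - m20).

    pixels: list of (row, col) coordinates of the region.
    """
    n = len(pixels)
    rc = sum(p[0] for p in pixels) / n      # centroid row
    cc = sum(p[1] for p in pixels) / n      # centroid column
    m11 = sum((r - rc) * (c - cc) for r, c in pixels) / n
    m20 = sum((r - rc) ** 2 for r, c in pixels) / n
    m02 = sum((c - cc) ** 2 for r, c in pixels) / n
    return -0.5 * math.atan2(2.0 * m11, m02 - m20)

horizontal = [(0, c) for c in range(5)]  # a horizontal line of pixels
vertical = [(r, 0) for r in range(5)]    # a vertical line of pixels
print(orientation(horizontal))  # -0.0  (angle 0)
print(orientation(vertical))    # about -pi/2
```

Since the moments are computed about the centroid, the result is independent of where the region lies in the image.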

Figure 2.6: (1) objects, (2) objects with orientation arrow

Figure 2.6 shows an example of how the orientation of objects can be displayed, to illustrate this visually.

2.4.2 Colors

Colors are also an important feature of objects. They are a simple and sometimes very important discriminator.

RGB color images

RGB color images are a combination of the three primary colors red (R), green (G), and blue (B). The RGB color system is additive, i.e. colors are mixed by adding individual color components, starting from black.

The RGB color space forms a three-dimensional cube (see Figure 2.7), whose coordinate axes correspond to the three primary colors R, G, and B. The RGB values are positive and lie in the range [0, 255]. Any color Fi corresponds to a point within the color cube with the components Fi = (Ri, Gi, Bi), where 0 ≤ Ri, Gi, Bi ≤ 255.

Figure 2.7: Color cube [UNIM1]

A color image is composed of three gray images, one for each of the color values red, green, and blue. If all three have the minimal value, the result is black; at the maximal value, white.

The HSI color space

In the HSI color space, colors are described by hue, saturation, and intensity.

Figure 2.8: Color wheel in the HSI color model [NEUM05]

The colors are defined by angular coordinates; together, all colors form the color wheel (see Figure 2.8). A hue of 0°, for example, is red; around 120° it is green, and around 240° blue.

The saturation indicates the strength of the color concentration in an object, i.e. how much white light is added to the color. At saturation = 0, for example, no color is visible, so the result is a shade of gray. On the color wheel, saturation is the distance from the center.

The intensity indicates the strength with which a color is seen, i.e. how much total light is present. The HSI color space corresponds to a cylindrical shape, with the intensity as the coordinate perpendicular to the image plane.

The conversion into the HSI color space is useful for distinguishing colors, because only one parameter (hue) has to be taken into account as a feature for the distinction. In the RGB color space, all three color components are required.

In Figure 2.10, the original image (Figure 2.9) is split into its three color components red, green, and blue, with a separate view for each color. Figure 2.11 shows the original image converted into the HSI color space; the three images represent hue, saturation, and intensity.

Figure 2.9: Original three-channel image

Figure 2.10: RGB image, (1) red channel, (2) green channel, (3) blue channel

Figure 2.11: HSI image, (1) hue, (2) saturation, (3) intensity

The HALCON operator decompose3 converts a three-channel image into three one-channel images, i.e. it produces the red, green, and blue images of the original.

With the operator trans_from_rgb of the HALCON library, an image is transformed from the RGB color space to another color space (e.g. HSI).

The conversion formula for the transformation into the HSI color space is:
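The formula itself is missing from the source. As a sketch, here is one commonly used RGB-to-HSI conversion (hue in degrees, saturation and intensity in [0, 1]); HALCON's trans_from_rgb may differ in detail from this form.

```python
import math

def rgb_to_hsi(r, g, b):
    """One common RGB-to-HSI conversion; r, g, b in [0, 1]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:              # lower half of the color wheel
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, saturation 1
```

This matches the color-wheel description above: pure red maps to hue 0°, and a pure green input maps to 120°.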

2.5 Classification

The aim of classification is to assign the objects found by segmentation, characterized by their features, to object classes. There are several classification methods. Since the object recognition algorithms are an essential part of this work and are described in more detail, they are discussed only in Chapter 4.

3 Environment

This chapter describes the hardware and software used in this work.

3.1 Hardware

For this work, three camera systems were tested to find out which is best suited for the task. The following camera systems were used:

• Casio Exilim,
• Logitech webcam,
• uEye.

The first tests were carried out with the Casio Exilim digital camera. With this camera it was difficult to obtain uniformly exposed images. The problems were the automatic mode and the flash: the flash light led to sharp shadows, which affected recognition negatively. Without flash, however, the illumination of the images was poor, or a different light source would have been needed for suitable lighting. The automatic mode also had to be disabled in order to avoid differences in image illumination and color reproduction. Another problem was the lack of a way to fix the camera at a fixed point in order to obtain images with a constant distance and a single camera angle. For these reasons, the idea of using this camera was discarded.

The webcam from Logitech had insufficient resolution and the additional problem that settings such as brightness, contrast, and focus could not be changed manually; the camera adjusted everything automatically. This meant that the recordings always differed, and these images could not be used for recognition. The biggest problem was the strong differences in brightness between the images.

The best results were achieved with the uEye camera, because its settings could be controlled entirely by hand. Thus, optimal settings could be found for every problem and then retained by saving them. Another advantage was the availability of a tripod for this camera, so that an identical camera angle and distance could be guaranteed for all images.

For the most uniform illumination possible, an annular fluorescent tube was used, positioned above the camera. This allowed the images to be illuminated uniformly, without shadows being produced by the camera. Furthermore, the shadows cast by the parts to be recognized could thus be minimized as far as possible.

Figure 3.1: Workspace

Figure 3.1 illustrates the construction and arrangement of the hardware components. This workspace was used for the experimental procedure and the imaging.

3.2 Software – HALCON

HALCON is an image analysis system from MVTec, developed for use in industry, education, and research. In this work, version 9.0 of the software was used. The software offers a large library covering a wide range of image processing methods. It includes more than 1,400 operators, which provide functions for image processing and analysis. The following section discusses the central elements and functions; a detailed description of all the possibilities offered by HALCON is beyond the scope of this work.

3.2.1 Structure

HALCON consists of (see Figure 3.2):

• the interactive programming environment HDevelop,
• classes that make the library available to the programming languages C, C++, C#, Visual Basic, and Delphi,
• a large, comprehensive library of image processing operators (HALCON Image Processing Library),
• the extension package interface, which allows users to integrate existing and new algorithms into HALCON,
• the image acquisition interface.

    Figure 3.2: Architecture of the HALCON system [HALC1]

    3.2.2 HDevelop

    HALCON provides the interactive programming environment HDevelop, which offers an editor, an interpreter with debugging features, a management unit for variables and extensive visualization capabilities. A program is created in HDevelop by selecting the available operators and filling them with meaningful parameters. After insertion into the source code, the resulting program can be modified and tested. In addition, HDevelop provides several wizards that allow complex tasks such as matching or image acquisition to be controlled with a few mouse clicks via the graphical user interface.

    Figure 3.3: HDevelop programming environment

    The HDevelop environment contains four main windows: the graphics, the variable, the operator and the program window (see Figure 3.3).

    The graphics window is used to visualize the original and the processed image data. Several graphics windows can be created, so all important image data is always displayed in a clearly structured way and can be visualized as required at any stage of processing.

    The variable window displays the contents of the variables used during execution. The display can be rearranged to get a good overview of the different variables.

    The central task of the operator window is to enter or change the operators used. Input is via structured masks in which default values are proposed for the operator parameters. The structured format of the input reduces the risk of syntax errors. Furthermore, the operator window provides direct access to the context-sensitive online help for HALCON.

    The program window shows the source code of the program. Breakpoints can also be set here, and single lines can be enabled or disabled. A program can additionally be run in single-step mode, so that it is possible to track in the program window exactly which lines of code were executed and where the program currently stands. This makes it possible to debug the program and find errors.

    The operator menu lists all operators, divided into categories. If an operator is selected, it is transferred to the operator window and displayed. There, the parameters can be entered or the default values changed. If the parameters are then confirmed with OK, the operator is inserted into the code with the selected parameters.

    Via the visualization menu, images and regions that have been generated in the course of the program can be displayed, so the results of the individual image processing steps can be shown. Furthermore, there are several additional visualization and display capabilities for testing ideas to improve the program. For example, histograms can be displayed and analyzed to determine meaningful threshold values.

    Via the help menu, the reference manual can be reached, in which the various operators and their parameters are described.

    Via the wizard menu, the matching assistant and the image acquisition assistant can be opened, allowing complex tasks to be managed with a few mouse clicks using the GUI.

    When creating an HDevelop program, the source code is built line by line. The next operator is selected by typing directly in the operator window or by selecting the menu item. After confirming the input parameters, the source code for this operator is inserted into the program window at the current insertion position. The insertion position is indicated by a small triangle, the insert cursor, at the left side of the window.

    4 Object recognition algorithms

    In this chapter, four different object recognition strategies are presented: the simple assignment, the clustering methods (especially the "nearest neighbour" procedure), the training of artificial neural networks, and the Matching Assistant of HALCON in the development environment HDevelop, which is based on template matching.

    4.1 Assignment

    The easiest way to classify objects is the manual assignment of found objects to the individual object classes. In a first step, all the important object features by which the objects can be distinguished are defined. For each object class, the values of the individual features are then calculated by means of some meaningful sample images. Then, for each feature of each object class, the value range is determined. The program then checks for each detected object, on the basis of its actual feature values, whether these all lie within the value ranges defined for the respective object class. If this is the case, the object is assigned to this object class. If not all of the features lie in the defined ranges, the value ranges of the next object class are checked. This process is repeated until the detected features can be assigned to an object class or all object classes have been tested without result.

    In the following, an example demonstrates this procedure:



                              Smallest value    Largest value
      Object 1, area                500              600
      Object 1, color value         120              160
      Object 2, area                550              650
      Object 2, color value         200              230



    Source code for object classification (pseudo language):

    if ((area >= 500) && (area <= 600) &&
        (color_value >= 120) && (color_value <= 160))
            print("Object 1 detected");
    else if ((area >= 550) && (area <= 650) &&
        (color_value >= 200) && (color_value <= 230))
            print("Object 2 detected");
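    The pseudocode above can be sketched as a small runnable program. The class ranges are the ones from the example; the function name and the feature dictionary are illustrative assumptions, not part of the original implementation.

```python
# Minimal sketch of the range-based assignment method described above.
# The area and color-value ranges are taken from the pseudocode example;
# the function name and feature dictionary are illustrative assumptions.

CLASS_RANGES = {
    "Object 1": {"area": (500, 600), "color_value": (120, 160)},
    "Object 2": {"area": (550, 650), "color_value": (200, 230)},
}

def assign_class(features, class_ranges=CLASS_RANGES):
    """Return the first class whose value ranges contain all feature
    values, or None if no class matches (no assignment possible)."""
    for name, ranges in class_ranges.items():
        if all(lo <= features[f] <= hi for f, (lo, hi) in ranges.items()):
            return name
    return None

print(assign_class({"area": 560, "color_value": 130}))  # Object 1
print(assign_class({"area": 560, "color_value": 210}))  # Object 2
print(assign_class({"area": 400, "color_value": 100}))  # None
```

    Note that an area of 560 lies in the overlapping part of both area ranges; only the color value decides. This already hints at the overlap problem discussed below.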

    The advantage of such a method lies in its simplicity. Provided the number of objects is manageable, it is easy to implement.

    The disadvantage is that the method only works if the value ranges of two object classes do not overlap for all features. If they do overlap, it may happen that an object cannot be unambiguously assigned to one object class. Since the method does not work with probabilities or with distances from a defined average feature value, but with fixed value ranges, no statement can be made in this case about how likely the object belongs to one class or the other; an assignment is not possible. Another problem arises when just one feature value lies minimally outside the defined value range of the correct object class, because the object can then no longer be assigned to this class.

    Another disadvantage is that defining the ranges requires a large number of test cases for each object class in order to achieve a good detection rate even under varying conditions (e.g. lighting, noise); alternatively, a reasonable tolerance must be permitted at the range limits, which may negatively affect the detection.

    From these factors it can be concluded that this procedure, while easy to implement, is not suitable for good object recognition, especially because of its lack of robustness to varying conditions.

    4.2 Clustering methods

    Cluster analysis is a statistical, heuristic method for the systematic classification of observations. The aim of cluster analysis is to divide a given set of objects into clusters (groups, classes). Within a cluster, homogeneity should rule, that is, objects belonging to the same cluster should be as similar as possible; between clusters, heterogeneity should hold, i.e. objects belonging to different clusters should be very different (see [HA-PP97]).

    The class partition should be disjoint, that is, each object belongs to exactly one class, and exhaustive, i.e. all objects must be assigned. Another principle of cluster analysis is that the number of clusters should be chosen as small as possible, since otherwise the computation time becomes too large. There are seven different techniques of cluster analysis: the incomplete, the deterministic, the overlapping, the probabilistic, the possibilistic, the hierarchical and the "objective function" cluster analysis methods. Since only the probabilistic cluster analysis method comes into question for the object recognition problem, only this one is discussed further here.

    In the probabilistic cluster analysis method, a probability distribution over the clusters is determined for each object, indicating the probability with which the object is assigned to each cluster. An example of this clustering method is the "k-nearest neighbour" method. Here, the nearest neighbours from the entire set of objects are determined: the distance between the feature vector in question and all other feature vectors is calculated using the Euclidean distance. The k nearest neighbours, those with the shortest distances, are then determined. If all k neighbours lie in one cluster, the object is assigned to this cluster. If not all k neighbours lie in one cluster, a majority vote decides: the object is assigned to the cluster from which the majority of the nearest neighbours originate.
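    The k-nearest-neighbour procedure just described can be sketched as follows. The sample feature vectors (area, color value) and their labels are illustrative assumptions, not measured data from this work.

```python
# Minimal sketch of k-nearest-neighbour classification with the
# Euclidean distance and majority vote, as described above.
import math
from collections import Counter

def euclidean(x, y):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def knn_classify(sample, training_set, k=3):
    """training_set: list of (feature_vector, class_label) pairs.
    Returns the majority class among the k nearest neighbours."""
    nearest = sorted(training_set, key=lambda t: euclidean(sample, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training data: (area, color value) -> class label
train = [((510, 130), "Object 1"), ((590, 150), "Object 1"),
         ((560, 210), "Object 2"), ((640, 225), "Object 2")]
print(knn_classify((520, 135), train))  # Object 1
```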

    Euclidean distance:

    d(x, y) = sqrt( (x1 - y1)² + (x2 - y2)² + ... + (xn - yn)² )

    Here, xi denotes the value of the i-th feature of the object under consideration and yi the corresponding feature value it is compared against. The Euclidean distance is invariant with respect to translation, rotation and mirroring, but not invariant under scaling.

    With this distance calculation, two objects are called similar if their distance is very small. Large distances mean that little similarity exists. If the distance equals zero, the two objects are identical.

    The clustering method has two phases, the training (teach-in) phase and the classification phase. In the training phase, clusters are formed using the known test set; in the classification phase, an unknown object is classified by comparing its feature vector with those of the known clusters. It is often useful to introduce a rejection class into which all patterns are classified whose distance to the closest pattern class is greater than a threshold value, the rejection radius. With this minimum-distance classifier with a fixed radius, the pattern classes are represented by circles in the two-dimensional feature space.
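    The minimum-distance classifier with rejection radius mentioned above can be sketched as follows: a sample is assigned to the class of the nearest class centre, or rejected if even that distance exceeds the rejection radius. The class centres and the radius are illustrative assumptions.

```python
# Sketch of a minimum-distance classifier with a rejection class:
# samples farther than the rejection radius from every class centre
# are assigned to no class (None).
import math

def classify_min_distance(sample, centres, rejection_radius):
    """centres: dict mapping class label -> centre feature vector."""
    best_label, best_dist = None, float("inf")
    for label, centre in centres.items():
        d = math.dist(sample, centre)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= rejection_radius else None

# Hypothetical class centres in a 2-D feature space (area, color value)
centres = {"Object 1": (550, 140), "Object 2": (600, 215)}
print(classify_min_distance((545, 138), centres, rejection_radius=50))  # Object 1
print(classify_min_distance((900, 10), centres, rejection_radius=50))   # None
```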

    This method is not suitable for the object recognition problem with the given data set of objects to be identified, because the data set is too large, so no representative results are to be expected, and the computational cost is too great. The "nearest neighbour" method also tends to chain formation: many large clusters are formed, since elements lying close together but belonging to different groups cannot be separated exactly. Furthermore, this method is not supported by HALCON and would therefore be laborious to implement.

    4.3 Training of artificial neural networks

    Artificial neural networks mimic the organisation and processing of the human brain. They are characterized by their ability to learn, i.e. they can learn a task on the basis of training examples without having to be programmed explicitly, by their high fault tolerance, their high parallelism in information processing and their robustness to small disturbances and data defects.

    Neural networks are networks of primitive functions (cf. [ROJA96]). The primitive functions sit at the nodes (neurons) of the network and are usually no more than a summation of information and/or one-dimensional non-linear functions. Neural networks consist of links (edges) and neurons that perform the function evaluation. Information is transferred only over the edges of the network, where the transfer is modeled by an edge weight, i.e. the conveyed information is multiplied by a numerical factor and scaled in this manner. There are weighted and unweighted neural networks. Since in unweighted networks the network topology (interconnection pattern) can change through learning algorithms, they are not considered further here.

    The information is processed by the activated neurons by means of directed links between them. The highly idealized neurons consist of an input vector, a weight vector, an activation function and an output function.

    An artificial neural network is a black box that should produce a certain output set from an input set (see Figure 4.1).

    Figure 4.1: Artificial neural network (black box)

    In the black box, multiple layers may be present. This is known as a layered architecture, with an input layer (x1, ..., xn), an output layer (y1, ..., ym) and n hidden layers. The neurons within a layer are not interconnected. The neurons of adjacent layers form a complete bipartite graph, i.e. each neuron in one layer is connected to every neuron in the adjacent layer (see Figure 4.2).

    In the input layer, no processing is done; it merely distributes the input values to the inputs of the first hidden layer. The output layer delivers the output values of the whole network.

    Figure 4.2: Layered architecture of a neural network

    Because the training cost of a network increases with the number of weights, i.e. with each additional neuron, several different learning methods exist. One popular method is the backpropagation learning algorithm. Here, the minimum of the error function of a particular learning problem is searched for by gradient descent. The solution of the learning problem is the combination of network weights that minimizes the error function. One problem is that, depending on the initialization of the random number generator (RandSeed) used to initialize the network with random values, a relatively high error may be found as the supposed optimum. This means that the optimisation can get stuck in a local minimum. If this is the case, the network should be trained with a different initialization value.

    The larger the network, the more rugged the error surface becomes. That is, the number of local minima increases, which makes it harder to find the global minimum.

    Backpropagation networks are used when certain inputs are to be mapped to specific outputs. The learning problem is thus to find a function that maps the training input values as closely as possible to the corresponding training output values. The functions can be changed by modifying the network weights.


    Before each output there is an activation function, with which the output is calculated. There are different types of activation functions: the linear, the threshold and the sigmoid function.

    With the linear function (see Figure 4.3), the feature space is divided by hyperplanes, corresponding to a linear classifier. Since this function is limited in its applications, it is seldom used in practice.

    Figure 4.3: Linear function [WIKI2]

    The threshold function (see Figure 4.4) makes it possible to produce discrete output activations, so it is a common activation function.

    Figure 4.4: Threshold function [WIKI2]

    The most common activation function is the sigmoid function (see Figure 4.5). It belongs to the class of semi-linear functions; its advantage is that it is differentiable everywhere. The sigmoid function also smoothes the error function.

    Figure 4.5: Sigmoid function [WIKI2]

    In backpropagation, the sigmoid function is used as the activation function. It is defined as follows:

    f(x) = 1 / (1 + e^(-x))

    The operation of the iterative procedure (backpropagation algorithm) is as follows (see [NISC07]):

      1. At the beginning, the weights of the neurons receive a random value.
      2. A random training pair, consisting of an input vector and the corresponding target output vector, is selected.
      3. The output vector is calculated using the forward function.
      4. From this, an error measure between the output vector and the target vector is calculated.
      5. Then the weights are modified so that the error measure is minimized.
      6. Steps 2-5 are repeated until the error measure is acceptably small, i.e. until a certain threshold or a preset number of passes is reached.

    In this sequence, steps 2 and 3 are known as the forward step and steps 4 and 5 as the backward step (back propagation).
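    The steps above can be sketched on a deliberately tiny example: a single sigmoid neuron trained by stochastic gradient descent to learn the logical OR function. A real recognition network would have hidden layers and many more weights; the learning rate, seed and iteration count here are illustrative assumptions.

```python
# Minimal sketch of steps 1-6 above: one sigmoid neuron trained by
# gradient descent on logical OR. Not the network used in this work.
import math
import random

random.seed(42)                                   # step 1: random initial weights
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = random.uniform(-0.5, 0.5)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(5000):                             # step 6: repeat for many passes
    x, target = random.choice(data)               # step 2: pick a training pair
    y = sigmoid(w[0]*x[0] + w[1]*x[1] + b)        # step 3: forward step
    err = y - target                              # step 4: error measure
    grad = err * y * (1 - y)                      # step 5: gradient of squared error
    w = [wi - 0.5 * grad * xi for wi, xi in zip(w, x)]
    b -= 0.5 * grad

for x, target in data:
    y = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
    print(x, round(y))   # the rounded outputs reproduce the OR truth table
```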

    The more complex the input-output relationships and the greater the number of feature-to-class assignments to be learned, the more hidden neurons are needed. However, the number of hidden layers and the number of neurons per layer should be kept minimal, since with an excessive number the ability to generalize vanishes, i.e. the learned relationship between objects is no longer reproduced correctly on unknown data. This problem also occurs when the network is trained for too long.

    Classification with neural networks can be divided into two parts: first the training phase and second the classification phase.

    In the training phase, the training images are loaded and read. The defined features are then calculated from the images for each object, and the training data is constructed from them. The neural network is trained on this data: the connection weights are adjusted independently in accordance with the learning algorithm (e.g. the back propagation algorithm) so that the network can solve the desired task. The aim of the training is that unknown input patterns that were never trained are also processed properly.

    In the classification phase, the images on which object recognition is to be performed are likewise loaded. Here too, just as in training, the feature vector is calculated for each object found in the image. This feature vector is then passed to the trained network and the classification is started. The result is a trained object class to which the object is assigned. If the network has good generalisation properties, the result is most likely the correct object class.

    A great advantage of this method is that object recognition can be learned from training examples without being explicitly programmed. The high parallelism in information processing is also very positive: even complex object recognition tasks can be processed with high performance. The high fault tolerance and robustness to small disturbances and data defects are a further plus point. A disadvantage of neural networks is that it is difficult to establish an optimal network topology; many tests and evaluations are necessary to obtain suitable parameters for the input, hidden and output layers. Another disadvantage is that the algorithm can get stuck in a local minimum of the error function and thus not reach the optimal detection rates.

    Due to these advantages and disadvantages, and the fact that this recognition algorithm is also suitable for a large number of objects, it can be used for the object recognition problem at hand. In addition, this method is supported by HALCON.

    4.4 Matching Assistant

    The matching assistant in the HALCON development environment HDevelop is based on template matching. In shape-based matching, a template is generated from a region and compared with the objects detected in test images. The object is stored as a template, and for each pixel and its neighbours it is checked how great the similarity to the given template is. Template matching is computationally intensive and therefore time-consuming.

    In template matching, a template of the object to be detected is slid over a test image, and it is determined to what extent the region located under the template matches the searched object. This is repeated until the template has been tested at all positions of the image in all possible orientations. Since this method is very costly, it is only suitable for a small number of objects.

    In template matching, a template is marked in the training image. From this template, a model is generated. This model is used to find objects in the test image that are similar to the template.
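    The sliding-template search described above can be sketched as follows, using the sum of squared differences as the similarity measure over a small grayscale image. Rotation is omitted for brevity, and the pixel values are illustrative assumptions; HALCON's shape-based matching works on model contours rather than raw gray values.

```python
# Minimal sketch of template matching: slide a template over every
# position of a grayscale image and keep the position with the
# smallest sum of squared differences (SSD).

def match_template(image, template):
    """Return (row, col) of the best match (smallest SSD)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_ssd = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best_ssd:
                best_pos, best_ssd = (r, c), ssd
    return best_pos

image = [[0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 9, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(match_template(image, template))  # (1, 1)
```

    The exhaustive search over all positions (and, in the general case, all orientations) is what makes the plain method so computationally expensive.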

    Template matching does not require any kind of segmentation and is also robust and flexible, since it can be made invariant to the lighting, size, rotation and position of the object; this, however, depends on the choice of method.

    HALCON offers several methods for template matching: gray-value-based, shape-based, component-based, correlation-based, perspective deformable, descriptor-based, 3D and point-based matching.

    The matching assistant in HDevelop supports only the shape-based matching method. This method is robust to noise, clutter and non-linear illumination changes. It is also invariant to scaling and rotation.

    The process of shape-based matching is divided into two phases. In the first phase, the model is described and created. It can be stored in a file and used in several applications. In the second phase, the model is used to find and locate an object.

    Below, the matching assistant of HDevelop is briefly presented.

    Figure 4.6: Matching Assistant (model generation)

    In model generation (see Figure 4.6), an already created model is loaded and processed, or a new model is created. When building a new model, first an image (template image) is loaded on which the object to be detected is shown. On the loaded image, the ROI (region of interest) must then be created manually. The region can be drawn as an axis-parallel rectangle, an arbitrary rectangle, a circle, an ellipse, or an arbitrary shape. To fix the region, the right mouse button is pressed. From the region in the ROI, the model is then generated automatically. The resulting model parameters can be generated automatically or defined and set manually; for example, scaling or contrast adjustments that affect the creation of the model can be made here.

    Figure 4.7: Matching Assistant (model application)

    In the model application (see Figure 4.7), test images are loaded on which the created model is to be found. In addition, the default search parameters can be changed here and the detection rate optimized. The optimisation goal and the recognition rate can be adjusted. As the optimisation goal, one of three approaches can be selected: the first is "find the number of instances specified in the table above", the second is "find at least one model instance per image", and the third is "maximum number of model instances per image". With the "detection rate" parameter, the recognition rate that the search for the object should at least, or exactly, meet can be expressed as a percentage.

    Figure 4.8: Matching Assistant (inspection)

    During inspection (see Figure 4.8), the detection rates and statistics can be read off, so that the parameters can be tested and modified in the case of poor results. The effect of the parameter settings on the detection rate can thus be checked easily, without having to generate source code. This saves a lot of time, and the results can be read off conveniently.

    Figure 4.9: Matching Assistant (code generation)

    Code generation (see Figure 4.9) allows the source code to be generated quickly. Here, the variable names to be used are set. Another choice is whether the model image is displayed at run time or the created model file is loaded directly. In addition, there is a code preview function in which the source code is clearly displayed in tabular form.

    Figures 4.10 and 4.11 show examples of models created from images, together with the model contours.

    Figure 4.10: (1) original "green rocks" (2) contour model "green rocks"

    Figure 4.11: (1) original "wheel" (2) contour model "wheel"

    The Matching Assistant is basically a good and easy way to design and implement a simple object recognition technique without background knowledge. However, this method is only suitable if the number of objects to be recognized is small and their contours differ significantly. The detection rate can indeed be optimized in detail using the model settings, but that involves a lot of manual work, and detailed knowledge of the effect of each parameter is required. This cancels out the previously mentioned advantage of simplicity. Another problem is the manual marking of the ROI: care must be taken that really only the object is marked, without any background. For complicated object shapes this can be very laborious, which makes the method better suited to simple shapes such as rectangular or round objects.

    Another major drawback of the matching procedure is the lack of evaluation of colour information, since the model contains only the outlines and no colour information.

    In the application tested in this work, the detection rates were not very good: wrong parts were repeatedly detected, or parts were not detected at all (for two examples, see Figure 4.12). There were particular problems with parts that exhibited a similar surface structure but had a completely different shape. Due to the poor detection rates and the aforementioned disadvantages, especially the high number of object classes to be recognized and the necessary distinction of colours, the Matching Assistant is out of the question for the problem of this thesis.

    Figure 4.12: Incorrect detections with templates

    5 Problem-solving strategy


    Object recognition depends on good-quality image capture. To achieve this, suitable lighting had to be found that allowed images as homogeneous as possible, with little shadow and always with the same colour reproduction, even under different ambient lighting conditions. Furthermore, reflections of the light source on the objects had to be avoided.

    Initially, an attempt was made to accomplish this with indirect lighting from a floor lamp. However, the light intensity was not strong enough to illuminate the objects and the background well.

    After much experimentation, an annular fluorescent tube was used, installed centrally above the camera. Fluorescent tubes supply a bright, uniform light, and the ring light provides intense, almost shadow-free illumination along the optical axis of the camera.

    Further tests with differently coloured backgrounds (dark blue, black, red, white) were performed to determine with which one the best contrast between the objects and the background could be achieved. The best results were obtained with a white background; reflections were greater there, but this disadvantage had to be accepted.

    The camera was mounted on a tripod to avoid camera shake or "oblique" images, and so that the distance between object and camera remained the same, preventing unintended enlargements or reductions.


    Since the recorded images had a relatively good quality, and information can be lost during filtering, preprocessing was dispensed with.


    To clearly separate the objects in the image from the background, various possibilities were tested. The principle was the same in all cases: the objects were segmented using a manually or automatically selected threshold value. The operators available in HALCON in which the threshold is determined automatically (histo_to_thresh, auto_threshold, bin_threshold, ...) delivered very good results in some cases, but these depended heavily on the ratio of the visible background to the total area of all visible objects. The automatic determination therefore worked well with either few or many objects, but never equally for both cases. As a result, the manual adjustment of the threshold value (threshold) was chosen.
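    The fixed-threshold segmentation just described can be sketched as follows, assuming dark objects on the bright white background used in this work; the sample gray values and the chosen threshold are illustrative.

```python
# Minimal sketch of fixed-threshold segmentation: pixels darker than a
# manually chosen threshold are marked as object (1), the rest as
# background (0).

def threshold_segment(image, threshold):
    """Return a binary mask: 1 = object pixel, 0 = background pixel."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

# Hypothetical 4x4 gray-value image: a dark object on a bright background
image = [[250, 248, 251, 249],
         [247,  60,  55, 250],
         [252,  58,  62, 246],
         [249, 251, 248, 250]]
mask = threshold_segment(image, threshold=128)
for row in mask:
    print(row)
# The four dark pixels in the centre form the segmented object region.
```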

    The biggest problem in segmentation was the bright and partly transparent parts.

