Machine Vision (MV) Research

[Timeline: 1980s, emergence of PCs; 1990s, information superhighway; 2000s-2010s, human-centered healthcare robotics]

Motivation: Some questions we asked in the 1980s, which have motivated the MV research at the AIMRL ...

  • Question #1: Why does a color video camera cost only US$1,500, while a gray-level machine vision system costs US$50,000?
  • Question #2: Why do we need 0.5 seconds to compute the center of a small blob?
  • Question #3: We have long noticed that the human visual system (HVS) functions remarkably well even in the presence of significant noise. Can we effectively emulate some or all of the HVS features to improve machine vision for automating visual inspection tasks?

Design Vision for Machines (Late 1980s to Mid-1990s):

Many industrial tasks require sophisticated vision interpretation, yet demand low cost, high speed, accuracy, and flexibility. To be fully effective, MV systems must be able to handle complex industrial parts; this includes verifying or recognizing incoming parts and determining the location and orientation of each part within a short cycle time. Until the 1990s, most MV systems were based on the TV video standard (RS-170), which was defined for human perception. Lee and Li [1] offered the design concept of retroreflective vision sensing (RVS) for low-cost vision-based automation applications. Problems associated with conventional video systems for MV applications were also identified in [2-4]. Dickerson and Lee [5] developed, at Georgia Tech, a digital camera free of the limitations of the TV video standard (commercialized by DVT, now Cognex), which offers performance and cost advantages by integrating the imaging sensor, control, illumination, direct digitization, computation, and data communication in a single unit. These research findings have led to widespread use of low-cost, low-power LEDs in vision-based products and equipment, with a major impact on industry. (Top left: collocated RVS system; bottom left: FIVS; right: early DVT sensor)


Machine Vision Applications:

Interest in providing 3-DOF noncontact orientation feedback for a ball-joint-like motor motivated the development of a DSP-based Flexible Integrated Vision System (FIVS) [6], which has found several unique control and automation applications:

  • 3-DOF noncontact orientation sensor [7-8]: This real-time vision-based control of roll, yaw, and pitch motion in a single joint eliminates the friction, inertia, and backlash often associated with the use of single-axis encoders.
  • Servo-track writing for hard-disk drives: [9] related the theory of the grating interferometer to machine vision for performing noncontact (nanometer-scale) displacement measurements of the actuator arm.
  • Vision-guided robotic pickup of moving objects on a vibratory conveyor: [10-12] formulated the object-handling problem in the context of prey capture, with the robot as a “pursuer” and a moving object as a passive “prey”.
  • Vision-based tracking control of an unmanned vehicle [13].


HVS-Inspired Color Design for Machine Vision (1996-2010):

Motivated by the human ability to perceive fine gradations of color, Lee, Li, and Daley developed a general method that uses properties observed in the human visual system (HVS) to create artificial color contrast (ACC) between features. They also extended the principal component analysis (PCA) technique to characterize color-based target features from a set of training data, so that features can be more accurately represented in color space and efficiently processed for detection. In addition, they combined the ACC method and the PCA technique in a color-feature detection algorithm and demonstrated its effectiveness in food-processing applications, along with benchmark comparisons against two of the most commonly used classifiers: the neural network classifier (NNC) and the support vector machine (SVM).

Through a practical application involving live or natural products, where variability in natural objects is typically several orders of magnitude higher than in manufactured goods, [14] illustrates how unrelated features that lie very close to the target in color space appear as noise and often cause false detections. Here, for the first time, Hering’s theory of opponent colors is used quantitatively in designing color-based vision algorithms. The ACC method applies a difference of Gaussians to the opponent colors to increase the separation between the two color clusters characterizing the target and the noise; this makes identification more robust by reducing the chance of including noise in the bounding box determined by the PCA technique. Compared with the NNC and SVM, the combined ACC/PCA algorithm (which is based on the smallest bounding box characterizing the target) has several clear advantages, including simplicity in training and fast classification, since only three simple checks of rectangular bounds are performed. By contrast, both the NNC and SVM fail badly because their training operates on a garbage-in, garbage-out basis without considering the characteristics of the training set. This finding is significant in real-world applications, since the boundary of the target color subspace can only be constructed from a limited set of training samples. The ACC method and its use with the PCA technique have a spectrum of applications well beyond food processing, as the need to discriminate closely similar color features is both common and important; uncovering military camouflage is one example. This need presents further research opportunities as well as challenges.
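The detection pipeline described above reduces to two stages: an opponent-color difference-of-Gaussians filter (ACC) to enhance target/noise separation, followed by a PCA-aligned bounding box whose classification is three rectangular-bound checks. The sketch below illustrates the idea in Python; the opponent-channel definitions, filter scales, and class/function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def opponent_channels(rgb):
    """Hering-style opponent colors: red-green and blue-yellow axes (illustrative)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r - g, b - 0.5 * (r + g)], axis=-1)

def artificial_color_contrast(rgb, sigma_center=1.0, sigma_surround=3.0):
    """ACC sketch: difference of Gaussians (center minus surround) applied to
    each opponent channel to increase separation between color clusters."""
    opp = opponent_channels(rgb.astype(float))
    acc = np.empty_like(opp)
    for k in range(opp.shape[-1]):
        acc[..., k] = (gaussian_filter(opp[..., k], sigma_center)
                       - gaussian_filter(opp[..., k], sigma_surround))
    return acc

class PCABoundBox:
    """Smallest PCA-aligned bounding box of the target's training colors;
    classification is three checks of rectangular bounds, one per axis."""
    def fit(self, colors):                        # colors: (N, 3) training set
        self.mean = colors.mean(axis=0)
        _, _, vt = np.linalg.svd(colors - self.mean, full_matrices=False)
        self.axes = vt                            # principal axes as rows
        proj = (colors - self.mean) @ self.axes.T
        self.lo, self.hi = proj.min(axis=0), proj.max(axis=0)
        return self

    def predict(self, colors):                    # True where inside the box
        proj = (colors - self.mean) @ self.axes.T
        return np.all((proj >= self.lo) & (proj <= self.hi), axis=1)
```

Because the box is the smallest one enclosing only the training samples, a limited training set still yields a tight target boundary, which is the property [14] contrasts with the NNC and SVM.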

Challenges:

The evolution of the human visual system over many millions of years has provided the human eye and its higher processing centers with an incredible range of light/dark adaptation, a wide field of view, and color perception, unparalleled by any single man-made system. It is interesting to note that many visual prosthetics for the blind are based on camera designs for machines from the 1990s, suggesting that realistically emulating a human visual system remains a daunting challenge.

References:

  1. Lee, K-M. and D. Li, "Retroreflective Vision Sensing for Generic Part Presentation," J. of Robotic Systems. February 1991, vol. 8, no. 1, pp. 55-73.
  2. Lee, K-M., "Flexible Part-Feeding System for Machine Loading and Assembly, Part I: A State-of-the-art Survey," Int. J. of Production Economics. 1991, vol. 25, pp. 141-153.
  3. Lee, K-M., "Flexible Part-Feeding System for Machine Loading and Assembly, Part-II: A Cost-Effective Solution," Int. J. of Production Economics. 1991, vol. 25, pp. 155-168.
  4. Lee, K-M., "Design Concept of an Integrated Vision System for Cost-Effective Flexible Part-Feeding Applications," ASME J. of Engineering for Industry. November 1994, vol. 116, pp. 421-428.
  5. Dickerson, S. and K-M. Lee, Image Reading and Processing Apparatus, US Patent 5,146,340 (8 September 1992); Image Reading System, European Patent 0549736 (7 January 1998), and Canadian Patent 2,088,357 (9 May 1998).
  6. Lee, K-M. and R. Blenis, "Design Concept and Prototype Development of a Flexible Integrated Vision System," Journal of Robotic Systems, 11(5), pp. 387-398, 1994.
  7. Lee, K-M., Orientation Sensing System and Method for a Spherical Body, US Patent 5,319,577 (June 7, 1994).
  8. Lee, K-M., R. Blenis, and T.-L. Pao, System and Method for Controlling a Variable-Reluctance Spherical Motor, US Patent 5,402,049 (March 28, 1995); Real-Time Vision System and Control Algorithm for a Spherical Motor, US Patent 5,416,392 (May 16, 1995).
  9. Lee, K-M., H. Garner, and L. Guo, Method and Apparatus for Measuring Angular Displacement of an Actuator Arm relative to a Reference Position, US Patent 6,188,484 (February 13, 2001). Also: "Development of a Grating with Application to HDD Servo-Track Writing," ASME J. of Manufacturing Science and Engineering. August 2001, vol. 123, no. 3, pp. 445-452.
  10. Lee, K-M. and Y. Qian, "Intelligent Vision-Based Part-Feeding on Dynamic Pursuit of Moving Objects," ASME J. of Manufacturing Science and Engineering (JMSE). August 1998, vol. 120, pp. 640-647.
  11. Lee, K-M. and Y. Qian, "A Vision-Guided Fuzzy Logic Control System for Dynamic Pursuit of Moving Target," Microprocessor and Microsystems, Elsevier Science. 1998, 21, pp. 571-580.
  12. Lee, K-M. and J. Downs, “Vision-guided Dynamic Part Pick-up Learning Algorithm,” Proc. of the 1997 IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics (AIM’97), Tokyo Japan, June 16-20, 1997, pp. 55-60.
  13. Lee, K-M., Z. Zhou, R. Blenis, and E. Blasch, "Real-Time Vision-Based Tracking Control of an Unmanned Vehicle," Mechatronics. October 1995, vol. 5, no.8, pp. 973-991.
  14. Lee, K.-M., Q. Li, and W. Daley, "Effects of Classification Methods on Color-based Feature Detection with Food Processing Applications," IEEE Trans. on Automation Science and Engineering. January 2007, vol. 4, no. 1, pp. 40-51.


Professor Kok-Meng Lee
George W. Woodruff School of Mechanical Engineering
Georgia Institute of Technology
Atlanta, GA 30332-0405
Tel: (404)894-7402. Fax: (404)894-9342. Email: kokmeng.lee@me.gatech.edu
http://www.me.gatech.edu/aimrl/