Using LED Lights in High-Speed Imaging Applications

Hello, I’m Matt Pinter, and welcome to this blog, which describes how to use LED illumination in high-speed machine vision applications.

As many of you are aware, the introduction of CMOS-based image sensors has led to the increased use of high-speed cameras in machine vision applications. Many commercial cameras are now available both as stand-alone units and as cameras that connect to a host computer over high-speed interfaces such as 10GigE, CameraLink and CoaXPress.

While stand-alone high-speed cameras have the advantage of portability, those that are interfaced to host computers are often lower in cost, making them more cost-effective for machine vision applications. Both types of camera, however, use CMOS image sensors running at high speed. To increase frame rates, manufacturers have taken advantage of the windowing mode of these CMOS sensors. For example, a camera using a 2048 x 1536 sensor may run at 100fps at full resolution, but to reach speeds of 500,000fps the number of pixels captured must be reduced to a window of 672 x 24.

When using such cameras for high-speed imaging, the amount of motion blur in the captured image must be reduced. This pixel blur depends on the speed at which the part being inspected is moving, the image size, the field of view (FOV) of the camera and its exposure time. To calculate the pixel blur, the following formula can be used:

Blur in pixels = (Line speed × Exposure time) × (Image size / FOV)
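As a quick sketch of this calculation, the short Python snippet below implements the blur formula directly. The function name and the choice of units (millimetres, seconds, pixels) are illustrative assumptions, not part of any Smart Vision Lights tool.

```python
def pixel_blur(line_speed_mm_s, exposure_time_s, image_size_px, fov_mm):
    """Estimate motion blur in pixels.

    Blur = (line speed * exposure time) * (image size / FOV):
    the distance travelled during the exposure, converted to pixels.
    """
    distance_mm = line_speed_mm_s * exposure_time_s   # movement during the exposure
    pixels_per_mm = image_size_px / fov_mm             # pixels spanning the field of view
    return distance_mm * pixels_per_mm

# Example: 100 mm/s line speed, 1 ms exposure, 640-pixel image width, 150 mm FOV
print(pixel_blur(100, 0.001, 640, 150))   # ~0.43 pixels of blur
```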

For a pixel blur of 2 pixels, for example, with a line speed of 100mm/s, an image size of 640 x 480 and an FOV of 150mm, the required exposure time is approximately 4.7ms. To help you calculate this pixel blur, Smart Vision Lights has developed a technical note that can be found here.
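Rearranging the same formula gives the maximum exposure time for a target amount of blur. The snippet below is a minimal sketch with hypothetical names, using the same units as above.

```python
def max_exposure_time(target_blur_px, line_speed_mm_s, image_size_px, fov_mm):
    """Solve the blur formula for exposure time (in seconds)."""
    return target_blur_px * fov_mm / (line_speed_mm_s * image_size_px)

# 2 pixels of blur, 100 mm/s line speed, 640-pixel image width, 150 mm FOV
t = max_exposure_time(2, 100, 640, 150)
print(f"{t * 1000:.2f} ms")   # about 4.69 ms
```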

While increasing the FOV of the camera decreases the captured resolution of objects in the image, reducing the size of the captured image using ROI techniques likewise affects the obtainable resolution for a given FOV. And although widening the aperture of the camera allows more light to be captured, the result is a smaller depth of field. Similarly, increasing camera gain makes the camera output brighter but at the same time amplifies the noise within the image. A simple spatial-resolution calculation, shown below, makes the FOV trade-off concrete.
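This illustrative sketch (not from the original article) shows how widening the FOV spreads the same number of pixels over more millimetres:

```python
def mm_per_pixel(fov_mm, image_size_px):
    """Spatial resolution: how many millimetres each pixel covers."""
    return fov_mm / image_size_px

# The same 640-pixel image width at two different fields of view
print(mm_per_pixel(150, 640))   # ~0.23 mm/pixel
print(mm_per_pixel(300, 640))   # ~0.47 mm/pixel: doubling the FOV halves the resolution
```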

Thus, one of the most effective ways to reduce the exposure time of the camera is to increase the amount of light used to illuminate the scene. To do so, developers can employ bright light sources such as Xenon or LED lamps operating in strobed mode. While Xenon light sources can produce millions of lumens for short, microsecond-length flashes, many machine vision applications do not require microsecond flash durations. For systems that can work with exposures in the 50-300μs range, strobing an LED light at hundreds or thousands of strobes per second allows LEDs to outperform Xenon lamps in applications where ultra-fast (10μs) pulses are not required.

To produce extreme light intensity, LEDs can be overdriven. This involves pulsing the device at very high currents for a short time and then turning the light off for a specific rest time. To support this, many manufacturers have developed strobe controllers that can drive LED lights at currents of 10A or 20A. However, increasing the drive current by a given factor will not increase light output by that same factor, and light output may actually decrease as the temperature of the LED rises.

As a rule, Smart Vision Lights strobes LED lamps at 4x the rated current while limiting the duty cycle of the LED light. Smart Vision Lights' OverDrive series of lights, for example, includes an integrated strobe driver that features a 10% duty cycle. Thus, when the light is strobed for 1ms, the LED is turned off for 9ms before the next strobe is activated. In machine vision applications this is acceptable, since the time taken to capture and process the image and to perform the system's I/O functions must all elapse before the light is activated again.
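The duty-cycle arithmetic is easy to sketch in code. The snippet below is an illustrative calculation only, not part of any Smart Vision Lights controller API.

```python
def strobe_timing(on_time_s, duty_cycle):
    """Return the required off time and the maximum strobe rate
    for a given on time and duty-cycle limit."""
    period_s = on_time_s / duty_cycle      # total cycle time so that on/period equals the duty cycle
    off_time_s = period_s - on_time_s      # rest time before the next strobe
    max_rate_hz = 1.0 / period_s
    return off_time_s, max_rate_hz

# 1 ms strobe at a 10% duty-cycle limit
off_time, max_rate = strobe_timing(0.001, 0.10)
print(f"off time: {off_time * 1000:.0f} ms, max strobe rate: {max_rate:.0f} Hz")
# off time: 9 ms, max strobe rate: 100 Hz
```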