AI - a tool for optical sensors

With 20 years of experience in developing optical measurement systems, we at R&D Vision have integrated Artificial Intelligence (AI) into our projects judiciously. Over the past few years, Artificial Intelligence and Deep Learning methods have demonstrated astounding performance on certain problems that are difficult to handle with conventional image-processing approaches. However, Artificial Intelligence is also often used as an umbrella term covering more or less effective and abstract methods that are supposed to adapt to solve every kind of problem. At R&D Vision, we keep a critical eye on the technologies built into our systems, remaining focused on the quality and management of the captured information, as well as on the intrinsic meaning of this data, so as to process it correctly. It is important to understand the operation and the limits of these powerful methods, but above all to master their integration into a specific vision system, from image acquisition (optics, hardware) to industrialisation in constrained, complex environments (outdoor imaging and the constraints of specific trades: rail, logistics, medicine, etc.).

This article presents four R&D Vision projects that illustrate key principles we have applied in developing high-tech systems with our customers.

Biogemma - Corn kernel counting tool

The first achievement is a corn kernel counting tool for Biogemma; it was one of the first projects in which we used deep learning. The system captures 360° high-frequency monochrome images using several synchronised cameras and determines whether the objects detected by a watershed-type algorithm are corn kernels. These objects (kernels or otherwise) correspond to 30×30-pixel monochrome thumbnails with little texture. The initial approach, based on clustering, achieved a 94% detection rate. With deep learning, we reached 99%, confirming the value of this type of approach for image classification.
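The detect-then-classify split described above can be sketched as follows. This is a minimal illustration, not Biogemma's actual code: `find`-style centroid input stands in for the watershed detector, and the `classifier` argument stands in for the trained network; all names here are our assumptions.

```python
import numpy as np

THUMB = 30  # side length of the monochrome thumbnails fed to the classifier

def extract_thumbnail(image, cx, cy, size=THUMB):
    """Crop a size x size window centred on a detected object.

    The image is zero-padded so that objects near the border still
    yield a full-size thumbnail.
    """
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    # Shift coordinates into the padded frame: the centre (cx, cy) of the
    # original image maps to (cx + half, cy + half) in the padded one.
    return padded[cy:cy + size, cx:cx + size]

def count_kernels(image, centroids, classifier):
    """Count corn kernels among candidates found by a conventional detector.

    `centroids` is a list of (x, y) object centres produced by a
    watershed-type segmentation step; `classifier` maps a thumbnail to
    True (kernel) or False (other object).
    """
    thumbs = [extract_thumbnail(image, x, y) for x, y in centroids]
    return sum(bool(classifier(t)) for t in thumbs)

# Toy usage: a bright-blob rule stands in for the deep-learning classifier.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:60, 50:70] = 200                     # one bright object
candidates = [(60, 50), (10, 100)]            # one real, one spurious
n = count_kernels(frame, candidates, classifier=lambda t: t.mean() > 50)
```

The point of the structure is the division of labour the article describes: the conventional algorithm proposes candidates, and the learned classifier only has to answer a narrow kernel-or-not question on small, uniform crops.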

The Biogemma case

Moët & Chandon - optical inspection machine (quality of crates of grapes)

The second achievement is an optical inspection machine for Moët & Chandon, used to objectively assess the quality of crates of grapes by detecting, among other things, the presence of mould. This system is the result of a collaboration spanning several years, which enabled the collection of over 100,000 images. The system operates on a sorting line run with lean-manufacturing processes during the harvest period. The first two harvests were used to acquire data, to develop sorting via image processing, and to verify the reliability and ease of integration of the system on the sorting line during production. The system was industrialised in the third year. We then integrated deep learning to improve mould detection (to over 90%).

To do so, we used a classification network. We took small images of objects deemed to be different from grapes, and the network classified these images according to whether or not they were mouldy.

Data acquisition and annotation were essential. The optical system was improved using polarising filters and directed lighting. To facilitate the annotation phase, the trade experts were guided by semi-automated tools developed by R&D Vision, which exploited the results of the image processing previously implemented on several thousand images. Careful acquisition management, and a synergy between conventional methods and deep learning in which each handles the tasks for which it is best suited, enabled us to reach the desired level of performance.
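A semi-automated annotation loop of the kind described can be sketched as below. The function name and the confidence-based triage rule are illustrative assumptions, not R&D Vision's actual tool: the idea is simply that scores from the existing image-processing step pre-label the confident cases, so the trade experts only review the ambiguous ones.

```python
import numpy as np

def triage_for_annotation(scores, low=0.2, high=0.8):
    """Split detector scores into pre-labelled and expert-review sets.

    `scores` are mould probabilities from a previously implemented
    image-processing step. Confident cases are pre-labelled
    automatically; only the ambiguous middle band is sent to the
    trade experts for manual annotation.
    """
    scores = np.asarray(scores, dtype=float)
    auto_negative = np.flatnonzero(scores < low)
    auto_positive = np.flatnonzero(scores > high)
    needs_expert = np.flatnonzero((scores >= low) & (scores <= high))
    return auto_negative, auto_positive, needs_expert

# Five candidate images: two clearly clean, two clearly mouldy, one unclear.
neg, pos, review = triage_for_annotation([0.05, 0.95, 0.5, 0.1, 0.85])
```

On large image sets this kind of triage concentrates expert time where the conventional algorithm is least certain, which is where the annotations are most valuable for training.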

R&D Vision optical inspection machine - Moët & Chandon

RATP & Eurotunnel - Automatic detection of items

The third case involves collaborations with RATP and Eurotunnel, in which roof-mounted sensors helped collect several thousand images of the same finely scanned infrastructure at different times. These large image databases are used to assess and improve algorithms for the automatic detection of items (e.g., insulators and clamps), but also to monitor changes to this equipment over time. The portability of these algorithms to on-board platforms is another topic explored by our technical teams.
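The monitoring of equipment over time can be illustrated with a per-pixel difference between two registered scans of the same infrastructure. This is only the principle, under a strong simplifying assumption stated in the comments: the real systems need geometric alignment between passes and learned detectors, none of which is shown here.

```python
import numpy as np

def change_mask(scan_t0, scan_t1, threshold=30):
    """Flag pixels whose intensity changed between two aligned scans.

    Assumes the two passes over the infrastructure are already
    registered to each other; real on-board data would need geometric
    alignment before any comparison.
    """
    # Work in a signed type so the subtraction cannot wrap around.
    diff = np.abs(scan_t1.astype(np.int16) - scan_t0.astype(np.int16))
    return diff > threshold

def changed_fraction(scan_t0, scan_t1, threshold=30):
    """Fraction of the scanned area that changed, as a simple alert score."""
    return change_mask(scan_t0, scan_t1, threshold).mean()

# Toy example: one fixture "disappears" between two inspection runs.
before = np.full((100, 100), 80, dtype=np.uint8)
before[20:30, 20:30] = 200           # a bright fixture on the first pass
after = before.copy()
after[20:30, 20:30] = 80             # fixture missing on the later pass
frac = changed_fraction(before, after)
```

Repeated scans of the same line make this kind of comparison meaningful: the more revisits exist in the database, the easier it is to separate genuine equipment changes from lighting or viewpoint variation.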

UPR Champagne - Optical inspection machine (detection of foreign objects in crates of grapes)

For UPR Champagne, the aim is to detect foreign objects, such as tools or gloves, in a crate of grapes, without being misled by variations in the colour and texture of crates and grape leaves. We developed an image-reconstruction model. Trained only on "good" crates, the model is able to reconstruct images of crates of grapes but not images of other objects. Reconstructing the image of a crate containing a foreign object therefore yields an image that differs from the original, and defects and foreign objects can be identified by comparison with the initial image. For grapes, around one hundred images are required to train this model. The system ran at the heart of the sorting system for over 20,000 crates of grapes during the 2020 harvest.
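The reconstruction idea can be illustrated with a linear stand-in: below, a PCA basis fitted to "good" patches plays the role of the reconstruction model. This is our simplifying assumption — the article's system uses a learned deep reconstruction network — but the anomaly-detection principle is the same: patches resembling the training data reconstruct well, while a foreign object reconstructs badly, and the residual flags it.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_basis(good_patches, n_components=8):
    """Learn a linear reconstruction basis from normal ("good") patches."""
    X = good_patches.reshape(len(good_patches), -1).astype(float)
    mean = X.mean(axis=0)
    # Principal directions of the normal data, via SVD of the centred set.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(patch, mean, basis):
    """Mean squared residual after projecting onto the normal-data basis."""
    x = patch.ravel().astype(float) - mean
    recon = basis.T @ (basis @ x)    # project onto the basis, then back
    return np.mean((x - recon) ** 2)

# "Good" patches: smooth gradients with mild noise, standing in for crates.
grad = np.linspace(0, 1, 16)
good = np.stack([np.outer(grad, grad) * s + rng.normal(0, 0.01, (16, 16))
                 for s in rng.uniform(0.5, 1.5, 100)])
mean, basis = fit_basis(good)

normal_err = reconstruction_error(good[0], mean, basis)
foreign = rng.uniform(0, 1, (16, 16))    # a patch unlike anything seen
foreign_err = reconstruction_error(foreign, mean, basis)
```

Thresholding the residual then separates normal content from foreign objects without ever needing examples of the foreign objects themselves — which is why the approach works with only around a hundred training images.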

This technology is being extended to detect defects on rigid objects with a constant shape and size. The aim is to automatically detect missing, broken or cracked parts without having to learn each defect individually. Around forty images are enough for deep learning with these types of objects.

Optical inspection machine - UPR Champagne

For 20 years, we have focused on the needs of our customers and on the quality and meaning of the information captured, so as to process it accurately. Using Deep Learning to solve image-processing problems is not always enough. Mastering the capture of information by selecting the right light/matter interaction mode, and creating a synergy between conventional image processing and Deep Learning, are essential for solving the most complex problems, in the most difficult environments (outdoor, uncontrolled or unknown), for multi-sensor applications or for systems detecting faults, foreign objects or changes.
We have built up several thousand terabytes of data from the sensors we have developed, deployed and validated in operational environments (monochrome and colour images, thermal and terahertz imagery, velocity fields, temperatures, etc.). This raw data is enhanced through image processing to provide a decision-making aid for operators. These results, which are intrinsically annotated, are now used by R&D Vision, together with the suitable AI technologies available, to deliver future on-board sensors that are even more autonomous and smart.

For more information, please email us at