June 3, 2020
Yield calculation for the Smart Vineyard project
This project was presented by Capgemini-Sogeti at FIRA 2019, within the scientific seminar organised by Robagri.
Capgemini-Sogeti has been running an experimental project for over a year to demonstrate the value of its CBIoTS IoT platform for monitoring complex systems such as vineyards.
The Château-Talbot vineyard in Bordeaux is collaborating on this project and asked us to meet two requirements: disease detection and yield calculation. We address the second requirement here.
For a vineyard like Château-Talbot, the calculation of yield per plot, and therefore per grape variety, is the first link in the evaluation of the wine value chain.
This assessment is all the more important given the life cycle of the wine, with blending and assemblage drawing on productions spanning several years. At present, yield evaluation is carried out, on the one hand, through chemical analysis of samples from the annual production for the qualitative part, and on the other hand, through a quantitative evaluation of the annual production per plot by the vineyard's technical expert, with a margin of error of about 10% to 15%. Our solution aims to calculate the yield more precisely, with the goal of achieving a reliability above 90%. To that end, we propose to count the number of grapes per bunch and per plot using object-recognition techniques and technologies applied to photos taken in natural conditions, without specialized shooting equipment.
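Once the grape count per plot is known, the yield estimate itself reduces to a simple multiplication. The sketch below illustrates this; the grape count and average grape mass are placeholder figures, not Château-Talbot data, and in practice the average mass would come from sampling and vary by variety and vintage.

```python
def estimate_plot_yield_kg(grape_count: int, avg_grape_mass_g: float) -> float:
    """Estimate plot yield in kilograms from a total grape count.

    avg_grape_mass_g is an illustrative parameter; a real pipeline
    would calibrate it per grape variety from sampled bunches.
    """
    return grape_count * avg_grape_mass_g / 1000.0


# Hypothetical figures: 1.2 million grapes at 1.6 g per grape.
yield_kg = estimate_plot_yield_kg(1_200_000, 1.6)
print(f"Estimated plot yield: {yield_kg:.0f} kg")  # 1920 kg
```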
In addition, our solution is designed to be embedded in robots traveling through the plots. Indeed, according to experts, a grape-count estimate based on plot sampling is less reliable for yield calculation than a comprehensive count over every bunch in each plot.
Finally, we want our solution to be embedded so that it can also meet the second requirement: disease detection, and even prediction of the occurrence of certain diseases by detecting the insects that carry them, an occurrence correlated with the environmental data reported to the CBIoTS platform by the various sensors placed on the plots.
For our experiment, between September 10 and 20, 2018, we took some 2,000 photos of bunches of three grape varieties: Cabernet Sauvignon, Merlot and Petit-Verdot. These photos were taken with standard cameras and without any device to enhance stability or lighting. These conditions were chosen deliberately, in order to assess the performance of recognition techniques and technologies available off the shelf and/or that we would be able to develop ourselves.
Given that some grapes are completely masked inside the bunch or fully covered by environmental elements (leaves, branches, etc.), our algorithms must recognize more than 98% of the grapes that are detectable in a bunch photo in order to reach our objective of 90% reliability for the yield calculation.
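A back-of-the-envelope check of that requirement can be sketched as follows. The visibility fraction used here is an assumed figure for illustration, not a measured one: before any statistical correction for hidden grapes, the fraction of all grapes actually counted is the product of the fraction detectable in the photo and the recognition rate on those detectable grapes.

```python
def counting_reliability(detectable_fraction: float,
                         recognition_rate: float) -> float:
    """Fraction of all grapes counted, before any statistical
    correction for grapes hidden inside the bunch or behind leaves."""
    return detectable_fraction * recognition_rate


# Assumed figures: 93% of grapes visible in the photo, 98% of those
# recognized by the algorithms -> roughly 91% overall.
print(round(counting_reliability(0.93, 0.98), 3))  # 0.911
```

This is why a recognition rate below 98% on the detectable grapes quickly pulls the overall reliability under the 90% target.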
Over the past decade, many image processing and deep learning methods have been developed for counting objects in general and fruit in particular. However, shooting conditions strongly influence counting accuracy. Camera instability, lighting conditions, occlusions, shadows, reflections and other noise, such as pebbles that can be mistaken for grapes, all greatly diminish the effectiveness of even the most efficient algorithms, such as Mask R-CNN and its derivatives. Indeed, applied to our photos, these algorithms do not exceed a 60% grape recognition rate.
To achieve our goal, we developed a complete in-house recognition process. It combines recent advances in deep learning with innovative strategies both for the image-processing chains applied before the parallelized models (neural networks) and for the chains that fuse the outputs of these models.
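The fusion step can be illustrated as follows. This is not the Capgemini-Sogeti process itself, only a minimal sketch of one common fusion strategy: detections proposed by several parallel models are matched by bounding-box overlap (IoU), and a detection is kept only when enough models agree on it.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def fuse_detections(model_outputs: List[List[Box]],
                    iou_thr: float = 0.5,
                    min_votes: int = 2) -> List[Box]:
    """Keep a detection when at least `min_votes` models propose an
    overlapping box (IoU >= iou_thr); one representative box per group."""
    fused: List[Box] = []
    for i, boxes in enumerate(model_outputs):
        for box in boxes:
            # Skip boxes already merged into a kept detection.
            if any(iou(box, kept) >= iou_thr for kept in fused):
                continue
            votes = 1 + sum(
                any(iou(box, other) >= iou_thr for other in model_outputs[j])
                for j in range(len(model_outputs)) if j != i
            )
            if votes >= min_votes:
                fused.append(box)
    return fused


# Two models agree on one grape; a stray box from a single model is dropped.
m1 = [(10, 10, 20, 20), (50, 50, 60, 60)]
m2 = [(11, 10, 21, 20)]
print(len(fuse_detections([m1, m2])))  # 1
```

A consensus vote like this trades a little recall for robustness against the false positives (pebbles, reflections) mentioned above; the real process is more elaborate.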
This work has brought our recognition rate slightly above 95% at present, with a training base of fewer than 100 photos. We expect to reach 96% to 97% simply by enlarging the training base, and we are confident of reaching our goal of 98% or more by improving the parallelism and the sequencing of the model-output fusions; hence the publication of these results.
Our method can easily be generalized to other fruit crops. More broadly, with minor modifications to our process, any type of object can be counted from photos taken under natural conditions and without specialized devices. This already allows us to consider the detection and counting of anomalies on leaves or fruit that are symptoms of a disease, or the detection and counting of certain disease-carrying insects. All we need now are the robots to carry our technology...