Pl@ntNet is a research and citizen-science platform, initially supported by Agropolis Foundation and developed since 2009 within the framework of a consortium bringing together four French research organizations (Cirad, Inria, Inrae and Ird). It is run by a multi-disciplinary team at the frontier of computer science and the life sciences, with engineers and researchers in computer science, botanists and ecologists. This team has benefited from the support of numerous partners worldwide, including members of educational and environmental NGOs, botanical gardens, herbaria, universities, local authorities and research organisations interested in biodiversity conservation or in AI research applied to conservation.
1) How would you describe Pl@ntNet?
Pl@ntNet is a participatory research and educational platform for producing, aggregating and disseminating botanical observations. Initiated in 2009, it is based on a web and mobile computational infrastructure, allowing the identification of plants by means of automatic visual recognition. Identification performance depends on the volume and quality of visual data used for training the deep learning model of the identification service.
The identification service is updated monthly based on new data produced, shared and validated by the network of participants (i.e. users who have created an account and become members of the community). Pl@ntNet mainly concerns wild plants (i.e. those propagating spontaneously in the natural environment) but also covers cultivated plants of agronomic and horticultural interest. The visibility and use of the platform have accelerated since February 2013, after its deployment on mobile devices (iOS in 2013, Android in 2014).
Since 2013, the number of daily users has doubled every year, reaching peaks of more than 550,000 users per day in 2021. In total, more than 32 million people (of whom 2.8 million have created a user account) have used the application worldwide (available in 36 languages). Open access, stability over time, continuous improvement and accessibility without personal authentication have contributed to the popularity of the tool. Citizen scientists have contributed in many different ways (e.g. by producing and curating data, or providing training) to adapt the platform to specific needs.
2) How did your team come up with the name for the app and website?
The name comes from the idea of creating a network (“Net”) of interested people to jointly produce new data and knowledge about plants (“Pl@nt”). It is amusing to note that the notion of “network” initially referred to human networks, but that more than five years after the start of the project, we switched to a technology based on artificial neural networks. The term “net” is thus more widely used today in the scientific community to refer to convolutional neural networks (CNNs) than to human networks. It was also a term as easily understood in French as in English.
3) How does the app work? Is it more accurate and easier to use the app or the website, or is it about the same ease and convenience?
A user submits, through the web or mobile interface, one or several images (up to four) of a plant that they are observing and wish to identify. Within a few seconds, the user receives a list of the most probable species for the query. Each species in the result is presented with a confidence score for the proposed name, as well as the images in our database most similar to the query.
Users with a (free) account can then share their plant observation with the community, whose members can validate or invalidate the proposed species name and rate the visual quality of the shared images. If the observation is validated, it is added to the image database used to train the species recognition model, which is regularly updated.
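The identify-then-validate workflow described above can be sketched in a few lines. This is a simplified, hypothetical illustration only: the function names, the scoring function and the fixed validation quorum are assumptions for the sake of the example, not the actual Pl@ntNet API or community-validation rules.

```python
# Hypothetical sketch of the identify-and-validate workflow:
# a user submits up to 4 images, receives a ranked list of candidate
# species with confidence scores, and the community can later validate
# the observation. All names and rules here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Observation:
    images: list                               # submitted images (max 4)
    suggestion: str                            # species name proposed by the submitter
    votes: list = field(default_factory=list)  # community validation votes (True/False)

def identify(images, score_fn, species_list, top_n=5):
    """Rank candidate species by confidence for the submitted images."""
    if not 1 <= len(images) <= 4:
        raise ValueError("submit between 1 and 4 images")
    scores = [(sp, score_fn(sp, images)) for sp in species_list]
    return sorted(scores, key=lambda kv: kv[1], reverse=True)[:top_n]

def is_validated(obs, quorum=3):
    """An observation joins the training set once enough members agree."""
    return sum(obs.votes) >= quorum

# Toy scoring table standing in for the deep learning model.
toy_scores = {"Quercus robur": 0.82, "Quercus petraea": 0.11, "Fagus sylvatica": 0.04}
ranked = identify(["leaf.jpg"], lambda sp, imgs: toy_scores.get(sp, 0.0),
                  list(toy_scores), top_n=3)
print(ranked[0])  # most probable species first, with its confidence score
```

In the real system the scoring function is the trained CNN, and validated observations feed back into the monthly retraining of that model.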
The mobile version is used by more than 90% of users, who can thus identify plants directly in the field. Identification performance is similar, since the same identification model is used on the web and on mobile. However, because the quality of the identification result depends on the quality of the images submitted, users who do not have good cameras on their smartphones can use a digital camera and submit their high-resolution pictures through the web version for better results.
4) About how complete is your database, relative to the plants that people are looking for?
Pl@ntNet covers about 36,000 species worldwide. This corresponds to a very large part of the plants that our user network is looking for; however, we are far from covering the world flora of nearly 400,000 species, which we are working towards. The greatest efforts remain to be made in tropical regions, which, although richer in biodiversity, are often the least known and have the least available data.
A recent article in Nature Plants by Nigel Pitman analyses the visual knowledge available and highlights the need to continue visual documentation of species at large scale.
5) How does the database system work? Are the photographs user contributed, from an authoritative source, or both? If so, what are the sources?
The system is enriched by data imports from validated scientific sources (such as the Encyclopedia of Life, Universities, Botanical gardens, Herbaria, etc.), as well as by the network of users who produce, share and evaluate the quality of the data put online.
6) What made the founders decide to create this app and website? Was there a need they saw, or was it something they had always wanted to create?
The creation of a mobile application was not envisaged at the start of the research project. The initial objective was to advance the techniques, without imagining that they could mature so quickly and be used by so many people. Although we knew that there was a real need to facilitate plant identification, we had not envisaged such a high level of expectation and involvement of civil society in this process.
7) Is your company funded by investors, or is it independent and privately held?
Pl@ntNet is a consortium of four French public research organisations (Cirad, Inria, Inrae, Ird), supported by a public foundation (Agropolis Foundation). The consortium is open to any new institution that would like to participate in the maintenance and development of this scientific, technological and societal adventure.
8) What are the company’s plans for the future? Are there other apps in the works, improvements to this app, etc.?
We are working to enable more massive use of the application, as the volume of daily users is in the hundreds of thousands today and could one day reach millions. Three other important development projects concern: (i) improving identification performance, especially for the least well illustrated species; (ii) providing an embedded mode to enable species identification in the mountains, in forests and in areas of the world where the Internet connection is poor; (iii) covering a greater proportion of the world’s flora, with particular emphasis on the flora of tropical regions.
9) Is the use of your app growing by consumers?
Yes, we are seeing a 20-40% growth in the number of daily users, depending on the region of the world, year on year.
10) Is this app mainly for consumer use, or is it targeted more for institutions such as research universities or individual scientists?
The best way to offer a high-performance service for the general public is to ensure that professionals in the management and conservation of biodiversity use it. It therefore seems important to us to meet the needs of professional users working in management, training and research organizations. This ensures better control of the quality of the data produced, and therefore greater benefit both for the scientists who use the data and for the civil society that benefits from them.
11) What is the approximate accuracy of the app? Are there steps users can take to increase the accuracy of the app (such as specific angles and distances from the plants for their photos, etc.)?
The latest measured performance of the recognition model, on 36,659 species, is 90.7% correct recognition within the first five species returned in an identification result. Users should submit images of the highest possible quality to take advantage of this performance, in particular by: (i) centering a single organ (leaf, flower, fruit), well isolated in the image; (ii) making sure that the image is sharp, with no extraneous object or finger in the frame; (iii) selecting healthy organs rather than ones degraded by age or pathogens.
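For readers unfamiliar with how a "correct within the first five species" figure is measured, here is a minimal sketch of the top-k accuracy metric: a query counts as a hit if the true species appears anywhere in the top k results. The data below are made up for illustration; only the metric itself reflects how such a figure is typically computed.

```python
# Top-k accuracy: fraction of queries whose true species appears
# among the first k species returned by the recognition model.
def top_k_accuracy(predictions, truths, k=5):
    """predictions: one ranked species list per query; truths: true species per query."""
    hits = sum(1 for ranked, truth in zip(predictions, truths) if truth in ranked[:k])
    return hits / len(truths)

# Toy example with two queries and made-up species labels.
preds = [["A", "B", "C", "D", "E"],   # true species "C" is in the top 5 -> hit
         ["B", "C", "A", "E", "D"]]   # true species "F" is absent       -> miss
truths = ["C", "F"]
print(top_k_accuracy(preds, truths))  # 0.5
```

Note that top-1 accuracy (the first suggestion being exactly right) is necessarily lower than or equal to the top-5 figure quoted above.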
12) Are you planning other apps for the future, or is your company affiliated with any other apps that are currently in production (such as for identifying mushrooms or animals, for example)?
No, we chose not to invest in other kingdoms, mainly because this was not our initial scientific objective, we do not have biological expertise in these kingdoms, and other actors have already made good progress on these groups.
13) Is there anything your team would like to share with our readers that has not been covered in the questions above?
We encourage readers who use the application and are satisfied with it to create a user account and share the geolocation of their plant observations in the natural environment. This improves knowledge of the species they observe and thus provides more precise information to support long-term conservation.