Robot in Italian vineyard
Photo: Sapienza University of Rome
By Gary Hartley

How will humans and robots work together on the farms of the future?

A few years ago, futurologists might have predicted that a large number of agricultural processes would be fully automated by now. That vision hasn’t materialised: McKinsey found in 2022 that fewer than 5% of farmers across Asia, Europe and the Americas were using automated technologies.

The huge variety of variables involved in the production of our food, from terrain and climate to the complexities of plant and animal biology, means that humans have remained a steadfast part of decision-making and manual tasks.

Yet agriculture’s troubles with finding sufficient labour to maintain or increase productivity remain, and robots and the software behind them have leapt rapidly forward. It’s looking increasingly like ‘agriculture 4.0’ and beyond will be a collaboration between human and machine — but if so, how will the two communicate while planting, harvesting and analysing together?

A team at Sapienza University of Rome is working on it.

Mastering ‘cobot’ communication

Human-robot collaboration, and the collaborative robots, or ‘cobots’, it aims to produce, is a relatively fledgling field of research. The Italy-based scientists are looking specifically at how humans and robots will work together in harvesting table grapes and pruning grapevines, as part of the broader EU Horizon 2020 CANOPIES project, but the essence of communication being explored has much broader implications.

“We have had autonomous robots doing specific tasks in certain domains for quite a while now, but still, there will be some tasks that might be actually difficult for the robot to handle, and we will still need human supervision. We are talking about the safety of the robot as well as the safety of the human,” explained research scientist Sandeep Reddy Sabbella.

The modes of communication between humans and robots are entirely familiar: speech and gestures from the humans; speech, light and sound from the robots. From the human side, speech is considered the primary mode of communication, mimicking the main way humans communicate with each other, while gestures provide a complementary option. When the robot needs to alert nearby humans to its presence, lights and sounds provide an additional form of clear communication, on top of speech.

Looking specifically at the task of picking table grapes, humans can help robots by adding information on top of what perceptual sensors provide: judging whether fruit is ready to be harvested, and deciding which bunches the robot can harvest successfully. Working out how best to get such messages across is the challenge the team are wrestling with.

Achieving clarity in the field

A crucial part of robots understanding human speech is being able to pick out the parts which should influence their movement or other action. “We are trying to analyse what are the main semantic roles or entities inside a sentence that could be helpful in order to understand the entire concept,” said research scientist Sara Kaszuba, noting that the framework of this understanding is based on what are known as speech act theory and frame semantics.
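To give a flavour of what picking out semantic roles means in practice, here is a deliberately simplified sketch. The frame names, role slots and rule-based matching below are illustrative assumptions for this article, not the CANOPIES team’s actual framework, which draws on far richer models of speech acts and frame semantics.

```python
# Hypothetical sketch: mapping a worker's utterance to a frame
# (the action) and its semantic roles (what and where).
# Frames, slots and the naive regex are illustrative only.

import re

# Each trigger verb names a frame and the roles it expects.
FRAMES = {
    "harvest": ["object", "location"],
    "prune":   ["object", "location"],
    "stop":    [],
}

def parse_command(utterance: str) -> dict:
    """Return the frame (action) and any roles filled from the sentence."""
    words = utterance.lower().split()
    frame = next((w for w in words if w in FRAMES), None)
    if frame is None:
        return {"action": None}
    roles = {"action": frame}
    # Very naive role filling: "the <object> on/in/at the <location>".
    m = re.search(r"the (\w+) (?:on|in|at) the (\w+ ?\w*)", utterance.lower())
    if m:
        roles["object"], roles["location"] = m.group(1), m.group(2)
    return roles

print(parse_command("Please harvest the bunch on the left row"))
# {'action': 'harvest', 'object': 'bunch', 'location': 'left row'}
```

Real systems replace the regex with learned parsers, but the output is the same in spirit: a structured command the robot’s planner can act on.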

The researchers’ algorithms are first worked through in virtual reality, using a digital twin of the field, before being deployed on the robot. Following this virtual work, it’s time for the real world, where the growing environment can test even the best-trained machine learning models with noise, light and shade.

In circumstances where it is too noisy for speech to be understood, the idea is that gestures can compensate, and vice versa when visibility is poor. There is, however, always the possibility that both noise and visibility work against effective communication. But here, too, there are solutions.

“One way that we tried to mitigate this particular problem, when it’s too noisy and too shady to detect any of the gestures, was to provide the users with a headset. While they’re working, even if it is noisy, the headset would be able to transfer the information to the robot or the human via speech,” said Sabbella.
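The fallback logic described above can be sketched as a simple decision rule. The thresholds and channel names here are assumptions made for illustration, not values from the project; a deployed system would estimate conditions from microphones and cameras rather than take them as inputs.

```python
# Hypothetical sketch of modality fallback: pick a human-to-robot
# channel based on field conditions. Thresholds are assumed values.

def choose_channel(noise_db: float, visibility: float) -> str:
    """Select a communication channel; the headset is the last resort."""
    SPEECH_MAX_NOISE = 70.0  # open-air speech assumed unreliable above this
    GESTURE_MIN_VIS = 0.4    # gesture detection assumed to need this visibility

    if noise_db <= SPEECH_MAX_NOISE:
        return "speech"      # quiet enough for open-air speech
    if visibility >= GESTURE_MIN_VIS:
        return "gesture"     # too noisy, but gestures are visible
    return "headset"         # both noisy and shady: route speech via headset

print(choose_channel(60, 0.9))  # speech
print(choose_channel(85, 0.8))  # gesture
print(choose_channel(85, 0.1))  # headset
```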

Photo: Sapienza University of Rome

Towards optimum solutions

With the individual modes of communication already tested, the next challenge for the team is to integrate them all. Safety is the biggest priority, given the considerable size of the robot they are working with and the likelihood that more than one will be in use at any given time.

“The most innovative aspect of this project is the idea of having multiple robots and multiple humans that can collaborate and interact in the field. When reviewing the literature at the start of the project, there was a lot on autonomous systems, but collaborative robotics in outdoor scenarios was largely missing,” said Kaszuba.

The work in Italy is part of a growing exploration of humans and robots working in tandem. Fruit and vegetable growing is likely to be a prime market for bringing these interactions into practice, with grapes, melons and strawberries suggested as the most promising crops, but there is also considerable scope in livestock farming.

But beyond integration, there are a few other communication issues to iron out. Not least the fact that, with English dominating the language models behind robots, many farm workers will be communicating with them in a language not their own.

“This can introduce a level of difficulty in comprehension and understanding of what the human is saying. It would be helpful to find a way to consider people’s dialects and accents,” noted Kaszuba.

The road to commercial roll-out

Despite considerable progress in understanding co-working practices between humans and robots, and language models evolving to become increasingly nuanced, Sabbella does not expect such technologies to be commercially available in the next few years. However, rapid advances and research in many fields, including artificial intelligence, might make it possible by 2028 or 2030.

“Until a few years ago, there were only traditional agricultural techniques, and computer science departments were completely focused on solving singular problems in vision, language and other aspects. It’s only now that we are trying to integrate these things, looking at multi-modal solutions,” he stressed.

Education will also need to be provided alongside the deployment of robots for interactive working, he continued, to ensure farm workers understand both the capabilities and limitations of the machines. 

Robots are coming to the world’s farms, though not as swiftly as some predicted, and perhaps not as part of a quick shift to full automation. This need not be cause for concern: a review of the research to date suggests that humans and robots working together on farming tasks have consistently been more effective than either working alone.

It’s a story of synergy, one also seen in the way that scientists who traditionally worked in silos have found a way to come together and make the biological fit better with the mechanical. Thanks to the hours being put in by academics today, a partially robotic future on the world’s farms can be one based on understanding.
