Researchers at Newcastle University in the U.K. believe they’ve discovered a differently evolved form of stereo vision in mantises. The research team studied the phenomenon in the insects precisely as one would hope — by attaching a pair of tiny 3D glasses to their bug eyes.
The scientists attached a mantis-sized pair of dual-color 3D glasses to the insects’ eyes, using beeswax as a temporary adhesive. The team then showed video of potential prey, which the mantises lunged at. In that respect, the bugs appeared to approach 3D image processing in much the same way humans do.
But when the insects were shown dot patterns used to test 3D vision in humans, they reacted differently. “Even if the scientists made the two eyes’ images completely different,” the university writes, describing its findings, “mantises can still match up the places where things are changing. They did so even when humans couldn’t.”
According to the university, the discovery of stereo vision makes mantises unique in the insect world. Their manner of 3D vision is also different from the variety developed in other animals like monkeys, cats, horses, owls and us. In this version of the trait, mantises match motion perceived between the two eyes, rather than the brightness that we use.
“We don’t know of any other kind of animal that does this,” Dr. Vivek Nityananda told TechCrunch. “There’s really no other example or precedent we have for this kind of 3D vision.” Nityananda adds that this sort of 3D vision has been theorized in the past, but the team believes that this is the first time it’s been detected in the animal kingdom.
The scientists believe the system evolved to be far less complex than our version of 3D vision, simple enough to be processed by the mantises’ smaller brains. That, Nityananda says, could be a boon for roboticists looking to implement 3D vision in simpler, lighter-weight machines.
“It’s a simpler system,” he says. “All mantises are doing is detecting the change in the relevant position in both eyes. Detection of change would be much easier to implement [in robotics], versus the more elaborate details in matching the views of two eyes. That would require much less computation power, and you could put that into perhaps a much more lightweight robot or sensor.”
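The idea Nityananda describes can be illustrated with a minimal, hypothetical sketch — this is not the researchers’ model, just one way to show the principle: instead of matching raw brightness between two views, each eye first detects *where things changed* between two frames, and the stereo match is done on those change maps alone. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def change_based_disparity(left_t0, left_t1, right_t0, right_t1,
                           max_disp=8, thresh=0.1):
    """Estimate a single horizontal disparity by matching regions of
    temporal change between two eyes, rather than raw brightness.
    Each input is a 2D grayscale frame (rows x cols)."""
    # Step 1: per-eye change maps — 1 where the pixel changed, 0 elsewhere.
    # This throws away brightness detail, keeping only "something moved here".
    left_change = (np.abs(left_t1 - left_t0) > thresh).astype(float)
    right_change = (np.abs(right_t1 - right_t0) > thresh).astype(float)

    # Step 2: slide one change map over the other and keep the shift
    # with the greatest overlap of changed pixels.
    best_disp, best_score = 0, -1.0
    for d in range(max_disp + 1):
        if d == 0:
            overlap = (left_change * right_change).sum()
        else:
            overlap = (left_change[:, d:] * right_change[:, :-d]).sum()
        if overlap > best_score:
            best_score, best_disp = overlap, d
    return best_disp
```

Because the comparison runs on binary change maps instead of full intensity patches, the per-shift cost is a single masked sum — the kind of reduced workload Nityananda suggests could suit lightweight robots or sensors.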
Featured Image: Mike Urwin/Newcastle University, UK