A research team from the University of Bristol has demonstrated a new type of camera that can build a pictorial map of where it has been and use this map to determine where it currently is, a feat that would be very useful in the development of robotics, smart sensors and driverless cars.
The introduction of maps and GPS proved a major hit, making it easier for travellers to find their way and locate restaurants. It enabled people to plan their next stop and even track where they had been.
This newly developed camera could be essential for smart devices, from robot vacuum cleaners to delivery drones to wearable sensors keeping an eye on our health, because it holds a memory of everywhere it has been without relying on external GPS, which does not work indoors.
Professor Walterio Mayol-Cuevas, Professor of Robotics, Computer Vision and Mobile Systems at the University of Bristol's Department of Computer Science, who led the team developing this new technology, said: "We often take for granted things like our impressive spatial abilities. Take bees or ants as an example. They have been shown to be able to use visual information to move around and achieve highly complex navigation, all without GPS or much energy consumption.
“In great part this is because their visual systems are extremely efficient and well-tuned to making and using maps, and robots can’t compete there yet.”
A new breed of sensor-processor device that the team calls a Pixel Processor Array (PPA) solves this by allowing processing on the sensor itself. This means that as images are sensed, the device can decide what information to keep, what to discard, and use only what it needs for the task at hand.
An example of such a PPA device is the SCAMP architecture, developed by the team's colleagues at the University of Manchester under Piotr Dudek, Professor of Circuits and Systems, and his team. This PPA has a small processor for every pixel, which allows massively parallel computation on the sensor itself.
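To give a flavour of what per-pixel processing means, here is a minimal sketch in ordinary Python/NumPy, not SCAMP's actual instruction set: a single operation written once is carried out at every pixel at the same time, rather than pixel by pixel in sequence.

```python
import numpy as np

# A 256x256 sensor: on a PPA, each of these pixels has its own small processor.
frame = np.random.rand(256, 256)

# One operation applied at every pixel at once (vectorised here to mimic the
# massively parallel, in-pixel execution of a real PPA):
bright = (frame > 0.5).astype(np.uint8)  # 1-bit "is this pixel bright?" map
```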
The team at the University of Bristol previously demonstrated how these new systems can recognise objects at thousands of frames per second, but the new research shows how a sensor-processor device can make maps and use them, all at the time of image capture.
This work formed part of the MSc dissertation of Hector Castillo-Elizalde, who completed his MSc in Robotics at the University of Bristol. He was co-supervised by Yanan Liu, who is pursuing his Ph.D. on the same topic, and by Dr. Laurie Bose.
Hector Castillo-Elizalde and the team developed a mapping algorithm that runs entirely on-board the sensor-processor device.
The algorithm is deceptively simple: when a new image arrives, it decides whether the image is sufficiently different from what it has seen before. If it is, the algorithm stores some of its data; if not, it discards it.
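The sketch below illustrates that keep-or-discard idea in plain Python. It is a toy stand-in, not the team's on-sensor implementation: the descriptor, the distance measure and the threshold are all illustrative assumptions, where the real PPA computes compact features directly on the pixel array.

```python
import numpy as np

def compute_descriptor(image):
    """Toy descriptor: a heavily downsampled, normalised copy of the frame.
    (A real PPA would compute a compact feature on-sensor instead.)"""
    small = image[::16, ::16].astype(np.float32)
    return small / (np.linalg.norm(small) + 1e-8)

def build_catalogue(frames, threshold=0.3):
    """Keep a frame's descriptor only if it differs enough from every stored view."""
    catalogue = []
    for frame in frames:
        d = compute_descriptor(frame)
        # Distance to the closest view already in the catalogue
        if not catalogue or min(np.linalg.norm(d - k) for k in catalogue) > threshold:
            catalogue.append(d)  # sufficiently novel: remember this view
        # otherwise: the frame is discarded, keeping the map small
    return catalogue
```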
As the PPA device is moved around, by a person or robot for example, it collects a visual catalogue of views. This catalogue can then be used to match any new image when the device is in localisation mode.
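Continuing the same toy sketch, localisation then amounts to finding the stored view that best matches the current frame; only the index of that view, not the image, would need to leave the device.

```python
def localise(image, catalogue):
    """Return the index of the catalogue view closest to the current frame."""
    d = compute_descriptor(image)
    distances = [np.linalg.norm(d - k) for k in catalogue]
    best = int(np.argmin(distances))
    return best, distances[best]  # only this index needs to leave the device
```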
Importantly, no images leave the PPA, only the key data that indicates where the device is with respect to the visual catalogue. This makes the system more energy efficient and avoids breaching the privacy of its user.
The team believes that this type of artificial visual system, developed for visual processing rather than necessarily to record images, is a first step towards making more efficient smart systems that can use visual information to understand and move in the world. Tiny, energy-efficient robots or smart glasses doing useful things for the planet and for people will need spatial understanding, which will come from being able to make and use maps.