“Join us as we work to give mobile devices a human-scale understanding of space and motion.”
This is how Google describes its Project Tango on its website.
Google is clearly invested in the project: it has now graduated from ATAP (the Advanced Technology and Projects group), where teams get a two-year window to show progress, into Google proper.
The basic principle is that human beings naturally build a spatial understanding of their surroundings, but the gadgets we use don't. Project Tango is about making our devices aware of space and motion. A simple example: the maps we use every day could guide us not just to a building, but right to its main entrance. Wouldn't that be amazing? Beyond that, it could help the visually impaired navigate places they have never been.
So how does it actually work? The device combines hardware and software to track its own motion and build a map of its surroundings, using sensors that capture more than a quarter of a million 3D measurements every second. In addition, a depth sensor captures the shape of the environment rather than its color.
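To make the idea concrete, here is a minimal sketch of how depth measurements and motion tracking fit together. As the device moves, motion tracking estimates its pose (a rotation and a translation), and each batch of depth points is transformed from the device's frame into a shared world frame so the map stays consistent. This is an illustrative example only; the function name, poses, and point values are assumptions, not the actual Tango SDK.

```python
import numpy as np

def depth_points_to_world(points_device, rotation, translation):
    """Transform 3D depth points from the device frame into the world
    frame, using the pose estimated by motion tracking.

    points_device: (N, 3) array of depth measurements in metres
    rotation:      (3, 3) rotation matrix of the device pose
    translation:   (3,)   position of the device in the world frame
    """
    return points_device @ rotation.T + translation

# Hypothetical pose: the device has moved 1 m forward (+z), no rotation.
pose_R = np.eye(3)
pose_t = np.array([0.0, 0.0, 1.0])

# Two depth measurements, e.g. points on a wall 2 m in front of the sensor.
depth = np.array([[0.0, 0.0, 2.0],
                  [0.5, 0.0, 2.0]])

world = depth_points_to_world(depth, pose_R, pose_t)
print(world)  # the wall points land 3 m along z in the world frame
```

Repeating this transform for every batch of sensor readings, at a quarter of a million measurements per second, is what turns raw depth data into a coherent 3D map.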
Continuously capturing that many measurements demands a lot of processing power, which luckily our mobile phones already have. What they lack is the specialized hardware and software, and that is exactly what this project supplies. The ATAP team has already built a prototype phone with a 4 MP camera and motion-tracking sensors that capture movement through 3D space.
As Regina Dugan, Google's Vice President of Engineering and head of ATAP, put it: “You’re going to get a glimpse of a small band of pirates trying to do epic shit.” We certainly will, considering that this will soon be on our mobile phones and tablets.