Researchers at the University of California, Berkeley, have recently developed a robotic learning technology that lets a robot visualize the future consequences of its actions, allowing it to figure out how to handle objects it has never seen before. The technology is expected to help self-driving cars anticipate events on the road, and to lead to more capable robots for home assistance over the next few years; the initial prototype, however, focuses on learning simple manual skills entirely from autonomous play.
Using this technology, called visual foresight, a robot can predict what will happen if it carries out a specific sequence of movements. For now, these robotic imaginations are relatively simple, since the predictions extend only a few seconds into the future, but that is enough for the robot to work out how to move specified objects around on a counter without disturbing other things.
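To make the idea concrete, here is a toy sketch of planning by prediction: sample many candidate action sequences, "imagine" the outcome of each with a predictive model, and keep the sequence whose predicted result best matches the goal. Everything here is a simplifying assumption for illustration; in particular, `predict_outcome` is a stand-in for a learned video-prediction network, replaced by a trivial 1-D dynamics stub, and is not the Berkeley system's actual model.

```python
import random

# Hypothetical stand-in for a learned predictive model: given the
# current object position and a sequence of actions, it predicts the
# final position. (The real system predicts camera images; this linear
# stub is purely illustrative.)
def predict_outcome(position, actions):
    for a in actions:
        position += a  # each action nudges the object
    return position

def plan(position, goal, horizon=3, candidates=200, seed=0):
    """Sample random action sequences, 'imagine' each outcome with the
    predictive model, and keep the sequence whose predicted final
    position is closest to the goal."""
    rng = random.Random(seed)
    best_seq, best_err = None, float("inf")
    for _ in range(candidates):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        err = abs(predict_outcome(position, seq) - goal)
        if err < best_err:
            best_seq, best_err = seq, err
    return best_seq, best_err

actions, err = plan(position=0.0, goal=2.0)
```

The key design point this sketch captures is that the robot never needs a hand-written controller: planning reduces to searching over imagined futures and scoring them against a goal.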
Crucially, the robot learns to carry out these operations without any assistance from humans and without prior knowledge of physics, the objects, or its environment. This is because the visual imagination is learned entirely from scratch through unsupervised, unattended exploration, in which the robot plays with objects placed on a table. After this play phase, the robot builds a predictive model of its surroundings, which it can then use to manipulate new objects it has never seen before.
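The two-phase recipe described above can be sketched in a few lines: first the robot plays with random actions and records only what it observes, then it fits a predictive model to that play data. This is a heavily simplified illustration under assumed toy dynamics (a 1-D world with an unknown linear gain, fit by least squares), not the actual deep video-prediction model the researchers trained.

```python
import random

rng = random.Random(1)

# --- Phase 1: unsupervised "play" ------------------------------------
# The robot pushes an object with random actions and logs only what it
# observes: (position before, action, position after). No human labels
# and no physics knowledge are provided. TRUE_GAIN is the toy world's
# hidden dynamics, unknown to the robot.
TRUE_GAIN = 0.8
transitions = []
pos = 0.0
for _ in range(500):
    a = rng.uniform(-1.0, 1.0)
    new_pos = pos + TRUE_GAIN * a
    transitions.append((pos, a, new_pos))
    pos = new_pos

# --- Phase 2: fit a predictive model from the play data --------------
# Least-squares estimate of the gain in the assumed model
# delta_position = gain * action.
num = sum(a * (p1 - p0) for p0, a, p1 in transitions)
den = sum(a * a for _, a, _ in transitions)
learned_gain = num / den

def predict(position, action):
    """'Imagine' the result of an action using the learned model."""
    return position + learned_gain * action
```

Because the model is learned purely from the robot's own observations, the same recipe transfers to objects the robot has never handled: it simply imagines outcomes with `predict` and picks actions accordingly.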