Research Seeks to Develop Reliable Robotic Teammate for Soldier

Robotic Teammate

Researchers from the Army Research Laboratory (ARL) and the Robotics Institute at Carnegie Mellon University have developed a new method for quickly teaching robots new traversal behaviors with minimal human oversight. The technique enables mobile robots to navigate autonomously in a variety of situations and environments, and to carry out the activities expected of a robotic teammate in a given situation.

According to the researchers, one of the goals of research in autonomous systems is to develop reliable robotic teammates for soldiers. A robot that can act as a teammate could take on many responsibilities more quickly and provide better situational awareness. Autonomous robotic teammates could also perform initial reconnaissance in potentially risky scenarios that would otherwise fall to human soldiers, reducing the risk of harm.

To achieve this, the researchers state, the robot must be able to use its learned intelligence to perceive, reason, and make decisions. In the research, the robot was taught to navigate between points in its environment while keeping to the edge of the road, and also to traverse covertly, using buildings for cover. The researchers suggest that, depending on the task, the most appropriate learned traversal behavior can be activated while the robot is operating.

The researchers achieve this using the principle of inverse optimal control, also often called inverse reinforcement learning, a machine learning technique that infers a reward function from a demonstrated or known optimal policy. In this case, a human provides the optimal policy by driving the robot along a route that best represents the behavior to be learned. The route is then related to visual terrain features, such as roads, buildings, and grass, to learn a reward function defined over those environment features.
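The core idea described above, inferring reward weights over terrain features so that a demonstrated route scores at least as well as the alternatives, can be illustrated with a minimal sketch. This is not the researchers' actual system: the feature map, paths, and perceptron-style weight update below are simplified assumptions chosen only to show the shape of feature-based inverse reinforcement learning.

```python
import numpy as np

def path_features(path, feature_map):
    """Sum the terrain feature vectors along a path (a list of cell ids)."""
    return sum(feature_map[cell] for cell in path)

def learn_reward_weights(demo_path, alt_paths, feature_map, lr=0.1, iters=100):
    """Toy inverse-RL sketch: adjust reward weights until the demonstrated
    path scores at least as high as every alternative path."""
    n_features = len(next(iter(feature_map.values())))
    w = np.zeros(n_features)
    phi_demo = path_features(demo_path, feature_map)
    for _ in range(iters):
        for alt in alt_paths:
            phi_alt = path_features(alt, feature_map)
            # If an alternative outscores the demonstration under the current
            # weights, nudge the weights toward the demonstrated features.
            if w @ phi_alt >= w @ phi_demo:
                w += lr * (phi_demo - phi_alt)
    return w

# Hypothetical grid cells with [road, grass] indicator features.
feature_map = {
    "a": np.array([1.0, 0.0]),  # road cell
    "b": np.array([1.0, 0.0]),  # road cell
    "c": np.array([0.0, 1.0]),  # grass cell
    "d": np.array([0.0, 1.0]),  # grass cell
}
demo = ["a", "b"]        # human demonstration keeps to the road
alternatives = [["c", "d"]]  # an alternative route through grass
w = learn_reward_weights(demo, alternatives, feature_map)
print(w)  # road feature ends up with the larger weight
```

After learning, the reward function ranks road-following routes above grass-crossing ones, which is the mechanism by which a demonstration driven by a human is turned into a reusable traversal behavior.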
