Adversarial Attacks on Optimization-Based Planners
Trajectory planning is a core component of a robot's algorithmic architecture. Trajectory planners typically use iterative optimization schemes to generate smooth trajectories that avoid collisions and are optimal for tracking given the robot's physical specifications. Starting from an initial estimate, such planners iteratively refine the solution until it satisfies the desired constraints. In this paper, we show that these iterative optimization-based planners can be vulnerable to adversarial attacks that force the planner either to fail outright or to take significantly longer to find a solution. The key insight is that an adversary in the environment can directly affect the optimization objective of a planner. We demonstrate how the adversary can adjust its own state configuration so that the objective's eigenstructure becomes poorly conditioned, leading to planning failures. We apply our method against two state-of-the-art trajectory planners and demonstrate that an adversary can consistently exploit specific weaknesses of an iterative optimization scheme.
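To make the eigenstructure intuition concrete, the following is a minimal, self-contained sketch (not the paper's attack, and not a real planner) of why poor conditioning slows an iterative refinement scheme: gradient descent on a simple quadratic objective takes more iterations as the Hessian's condition number grows, until an iteration budget is exhausted. All names and parameters below are illustrative assumptions.

```python
# Illustrative sketch: convergence of gradient descent on a quadratic
# objective f(x) = 0.5 * x^T H x degrades as the Hessian's condition
# number grows. This mirrors, in miniature, how an adversary that worsens
# the objective's eigenstructure can slow or stall an iterative planner.
import numpy as np

def iterations_to_converge(kappa, tol=1e-6, max_iters=100_000):
    """Minimize f(x) = 0.5 * x^T H x where H has eigenvalues {1, kappa}."""
    H = np.diag([1.0, kappa])      # Hessian with condition number kappa
    x = np.array([1.0, 1.0])       # initial estimate
    step = 1.0 / kappa             # stable step size (<= 1 / lambda_max)
    for i in range(max_iters):
        grad = H @ x               # gradient of the quadratic objective
        if np.linalg.norm(grad) < tol:
            return i               # converged
        x = x - step * grad        # iterative refinement step
    return max_iters               # budget exhausted: the "planner" fails

for kappa in [1, 10, 100, 1000]:
    print(f"condition number {kappa:>5}: "
          f"{iterations_to_converge(kappa)} iterations")
```

Running this shows iteration counts growing roughly linearly with the condition number, which is the standard convergence behavior of first-order methods on ill-conditioned quadratics and the mechanism the abstract alludes to.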