Adding perception to motion planning
In the previous section, we learned how to plan and execute trajectories for our robot programmatically. However, we weren't taking perception into account yet.
Usually, we will want to take the data from a 3D vision sensor into account; for instance, from a Kinect camera. This gives us real-time information about the environment, which allows us to plan more realistic motions that reflect any changes occurring in the scene. So, in this section, we are going to learn how to add a 3D vision sensor to MoveIt in order to perform vision-assisted motion planning!
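In MoveIt (ROS 1), point cloud data from a sensor such as the Kinect is fed into the planning scene through the occupancy map monitor, which is configured with a YAML file. As a preview of the kind of configuration involved, a minimal sensors_3d.yaml might look like the following sketch; note that the topic name and the numeric values are assumptions that depend on your camera driver and setup, not values taken from this book:

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    # Topic published by the depth camera driver (assumption: a Kinect-style
    # driver publishing on /camera/depth_registered/points).
    point_cloud_topic: /camera/depth_registered/points
    max_range: 5.0          # Ignore points farther than this (meters)
    point_subsample: 1      # Use every Nth point; 1 = use all points
    padding_offset: 0.1     # Extra padding around the robot for self-filtering
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
```

With this file loaded by the MoveIt configuration, the detected obstacles are inserted into the planning scene as an octomap, and subsequent motion plans avoid them automatically.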
Getting ready
First of all, let's make some changes to the current simulation so that we can work with perception more easily. We will create a new file named table.urdf in our workspace and add the following code to it:
<robot name="simple_box">
  <link name="my_box">
    <inertial>
      <origin xyz="0 0 0.0145"/>
      <mass value...
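Since the listing above is truncated, it can help to see how a comparable box URDF fits together as a whole. The following is a hedged sketch that generates a minimal box URDF programmatically; the dimensions, mass, and inertia values are illustrative placeholders, not the values used by this book's table.urdf:

```python
import xml.etree.ElementTree as ET


def make_box_urdf(name="simple_box", link_name="my_box",
                  size=(0.5, 0.5, 0.029), mass=1.0):
    """Build a minimal URDF string describing a single box link.

    All numeric values are illustrative assumptions.
    """
    robot = ET.Element("robot", name=name)
    link = ET.SubElement(robot, "link", name=link_name)

    # Inertial block: origin at half the box height, simple diagonal inertia.
    inertial = ET.SubElement(link, "inertial")
    ET.SubElement(inertial, "origin", xyz="0 0 %g" % (size[2] / 2.0))
    ET.SubElement(inertial, "mass", value=str(mass))
    ET.SubElement(inertial, "inertia",
                  ixx="0.01", ixy="0", ixz="0",
                  iyy="0.01", iyz="0", izz="0.01")

    # Visual and collision blocks share the same box geometry.
    box_size = "%g %g %g" % size
    for tag in ("visual", "collision"):
        section = ET.SubElement(link, tag)
        geometry = ET.SubElement(section, "geometry")
        ET.SubElement(geometry, "box", size=box_size)

    return ET.tostring(robot, encoding="unicode")


if __name__ == "__main__":
    # Write the generated description to a file such as table.urdf.
    with open("table.urdf", "w") as f:
        f.write(make_box_urdf())
```

The key structural point is that a link typically carries three child blocks: inertial (for physics), visual (for rendering), and collision (for planning); MoveIt uses the collision geometry when checking motion plans against the object.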