Guide to using the pose estimation code¶
This guide consists of 4 consecutive parts:
- Get Scene
- Generate Templates
- Set Region of Interest (ROI)
- Run action_node
The first 3 tasks are the setup procedure, which is split into 3 tabs. A new pose estimation is set up by working through each tab in turn. The 3 tabs are shown in the following image, and their contents are elaborated in the following sections. Finally, the command to run an action node in ROS is shown.
To perform the full setup, one simply has to complete the tasks in each window and then move to the next window.
1. Find Chessboard Position¶
Selecting the table position
The first task is to define the scene that the object will be placed in. This is done by placing a chessboard on the “table” where the object is placed.
The number of internal corners is defined as the Chessboard Pattern, with the largest dimension first. A chessboard of size 6x4 is shown:
The chessboard square size should be defined in millimeters, along with the distance from the top of the chessboard to the table. Pressing Calculate Position then determines the chessboard position relative to the “table” plane. Finally, pressing Save Transform As saves the scene under the given name.
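As a rough illustration of the geometry involved: the internal corners of the pattern form a planar grid with a known spacing, and the top-to-table distance simply offsets that plane. The function below is a hypothetical sketch (not part of the tool) of how those corner coordinates could be expressed in a frame whose z = 0 plane is the table:

```python
def chessboard_object_points(pattern=(6, 4), square_mm=40.0, thickness_mm=5.0):
    """3D coordinates (mm) of the board's internal corners, expressed in a
    frame whose z = 0 plane is the table; the board top lies thickness_mm
    above it. `pattern` lists the larger corner count first, as in the GUI.
    The square size and thickness defaults are illustrative assumptions."""
    cols, rows = pattern
    return [(c * square_mm, r * square_mm, thickness_mm)
            for r in range(rows) for c in range(cols)]

# 6x4 internal corners, 40 mm squares, board top 5 mm above the table
points = chessboard_object_points((6, 4), 40.0, 5.0)
```

Such a grid of 3D points is what a pose solver would match against the detected image corners to recover the board's position.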
2. Select the desired position¶
Choosing the position on the table and generate templates
When the scene has been defined, the template matcher needs to be created. To do so, the canonical pose of the object must be chosen. The following image is a visualization of the template generation window.
Pressing “Load STL” selects and loads the desired CAD model, which can then be inspected in the window above.
The drop-down list below “Load Scene” shows the available scenes. After selecting one, press “Load Scene” to load it.
Select Canonical Pose
Pressing “PREV” and “NEXT” steps through the canonical poses, which are visualized in the window on the right. The progress bar shows the index of the current canonical pose along with the total number of poses.
Select Range of Rotation
When the desired poses have been found, the rotation needs to be selected. If the full 360-degree rotation is desired, no further changes are necessary. If only a subset of rotations is needed, the slider can be used to preview it. Once a suitable range of rotations has been found, the two boxes below can be used to define the range.
Finally, the templates can be generated. The last box is used to name the detection, and pressing “BUILD” generates it.
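The rotation range selected above effectively becomes a list of in-plane rotation angles at which templates are rendered. The sketch below illustrates the idea; the step size and endpoint handling are assumptions, not the tool's actual behavior:

```python
def rotation_angles(start_deg, end_deg, step_deg):
    """Angles at which templates would be rendered. For a full revolution
    the endpoint is dropped so 0 and 360 degrees are not rendered twice."""
    n = int(round((end_deg - start_deg) / step_deg))
    if end_deg != start_deg and (end_deg - start_deg) % 360 == 0:
        return [start_deg + i * step_deg for i in range(n)]
    return [start_deg + i * step_deg for i in range(n + 1)]

full = rotation_angles(0, 360, 90)    # full revolution: [0, 90, 180, 270]
partial = rotation_angles(0, 90, 45)  # restricted range: [0, 45, 90]
```

Restricting the range this way reduces the number of templates, which speeds up both generation and matching.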
3. Test your created Detection¶
Setting the ROI and testing.
When the templates have been created, the final task is to define the ROI to restrict the search. Pressing Adjust ROI opens a pop-up window, where left-clicking selects the ROI corners: first select the upper-right corner, then the lower-left. Pressing ESC saves and closes the setup. An example with the ROI around a remote control is shown in the following figure (green is the upper-right corner, blue the lower-left):
When this has been completed, the system can be tested and is ready to be used. Pressing TEST will verify the detection.
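Since the two clicks can land anywhere in the image, the selected corners are typically normalized into an axis-aligned rectangle. A minimal sketch of that step, assuming an (x, y, width, height) convention (an assumption, not the tool's documented format):

```python
def roi_from_clicks(upper_right, lower_left):
    """Normalize the two clicked corners into (x, y, width, height),
    using the image convention: x grows right, y grows down."""
    urx, ury = upper_right
    llx, lly = lower_left
    x0, x1 = min(urx, llx), max(urx, llx)
    y0, y1 = min(ury, lly), max(ury, lly)
    return (x0, y0, x1 - x0, y1 - y0)

# green upper-right click at (300, 50), blue lower-left click at (100, 200)
roi = roi_from_clicks((300, 50), (100, 200))  # → (100, 50, 200, 150)
```

Sorting the coordinates makes the result independent of the order in which the corners were clicked.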
4. Run action node¶
To run your detection, start the action_node with the previously defined JSON file for the detection. See the last section for a description of the nodes.
Example of starting the action function:
$ rosrun template_matching action_node _detection_file:="/home/reconcell/.reconCellVision/templateMatching/detection/WEST_BAS2_RECT_TEST.json" __name:="ivamax"
To call this action, the “Detection.action” in vision_msgs has been implemented. An example of calling it can be seen in “vision_msgs/scripts/dummy_action.py”.
Running the camera¶
To run the camera, the following commands can be used. camera_name is the name you set for the camera, basler_serial is the camera's serial number, and calibrated is a boolean indicating whether the camera has been calibrated.
$ roslaunch caros_camera caros_camera.launch camera_name:="basler_2" basler_serial:="22037345" calibrated:="1"
$ roslaunch caros_camera caros_camera.launch camera_name:="basler_1" basler_serial:="22084405" calibrated:="1"
To calibrate the camera, you can follow the guide in the Calibration Manual
Or call the ros node, e.g.,:
$ rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.39979 image:=/basler/caros_camera/left/image_raw camera:=/basler/caros_camera/left
The second method is generally faster to set up, but the first method achieves a more precise calibration; note that the first method also requires MATLAB.
Installing the directory from Git¶
Go into your catkin source directory and clone from Git:
$ cd catkin_ws/src
$ git clone https://gitlab.com/fhagelskjaer/canonicalPoseTemplateMatching.git
$ cd canonicalPoseTemplateMatching
$ git clone https://gitlab.com/fhagelskjaer/offScreenRender.git
$ cd ..
Call the script to set up the folder structure by:
$ python setup.py
Then go back to the catkin directory:
$ cd ../..
And the workspace can be compiled with:
$ catkin_make
Start the camera (see Running the camera) and call the setup client:
$ rosrun template_matching template_matching_node
The node for setting up the object localization.
Subscribed topics:
- image_rect (sensor_msgs/Image)
- camera_info (sensor_msgs/CameraInfo)
Generated json files¶
- a file describing the plane parameters
- a file describing the detection
- _detection_file: the path to the previously defined “.json” file for the detection must be given here.
- __name: the name of the node and thus the action that will be called when using the pose estimation. It is recommended to rename this node in order to run several instances of it for different detections.
- prefix: the prefix of the detection files to be used by the detector.
- cameraName: the camera to use. It is important that the camera for prefix and cameraName match.
It is recommended to rename this node in order to run several instances of it for different cameras.