The machine-learning based workflow
The second workflow we introduce is the machine-learning based workflow, in which we make use of a trained CNN to predict organ types and locations in our original RGB images. This information is then projected onto the point cloud so that we can compute the fruit orientations in space. From there we can fulfil our goal: estimating the fruits' successive angles and internode lengths.
The aim of the reconstruction part is to reconstruct a 3D model of the plant, here a point cloud, from a set of RGB images.
As in the geometry-based workflow, we combine structure from motion with space carving to obtain a quick and reliable 3D reconstruction of the plant.
However, we add an extra layer of complexity by predicting organ types and locations instead of applying the simple and fast linear filter. This in turn improves the automation level of the reconstruction procedure, as the plant is automatically identified in the scene: there is no need to manually define the scene bounding-box.
- We start with the `Colmap` task to estimate both intrinsic and extrinsic camera parameters using a structure-from-motion algorithm.
- The camera intrinsics are used by the `Undistorted` task, with a `SIMPLE_RADIAL` model, to undistort the original RGB images.
- Then the `Segmentation2D` task performs semantic labelling of the plant organs in each image and creates a binary mask for each image and organ type.
- These masks are later used by the `Voxels` task, in combination with the camera extrinsics (also called camera poses), to perform space carving of a 3D volume. This reconstructs the volume occupied by the plant in the selected portion of the scene.
- Finally, this is turned into a point cloud describing the envelope of the reconstructed plant structure by the
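The space-carving step above can be sketched in a few lines. This is a minimal illustration assuming a pinhole camera model; the function name and signature are hypothetical and do not reflect the actual `Voxels` task API:

```python
import numpy as np

def carve(voxel_centers, masks, intrinsics, extrinsics):
    """Keep only voxels whose projection falls inside every binary mask.

    voxel_centers: (N, 3) world coordinates of candidate voxel centers.
    masks:         list of (H, W) boolean organ masks, one per view.
    intrinsics:    (3, 3) camera matrix K (shared by all views here).
    extrinsics:    list of (3, 4) [R|t] world-to-camera matrices.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for mask, pose in zip(masks, extrinsics):
        cam = homog @ pose.T               # world -> camera frame
        pix = cam @ intrinsics.T           # camera -> homogeneous pixel coords
        u = (pix[:, 0] / pix[:, 2]).round().astype(int)
        v = (pix[:, 1] / pix[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                        # carve voxels outside any mask
    return voxel_centers[keep]
```

Because a voxel is kept only if it projects inside the mask of every view, the surviving voxels approximate the visual hull of the plant.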
The aim of the quantification part is to estimate the fruits' successive angles and internode lengths from the 3D point cloud.
We project the CNN predictions about organ types onto the 3D point cloud to know the exact position of each fruit. From there we can estimate the fruit directions, using oriented bounding-boxes for the fruit directions and the mean skeleton for the branching points on the main stem.
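A fruit's direction can be approximated by the principal axis of its labelled points, which is also the long axis of its oriented bounding-box. The sketch below is an illustrative PCA-based helper, not the actual task implementation:

```python
import numpy as np

def fruit_direction(points):
    """Principal axis of one fruit's point cluster, via PCA.

    points: (N, 3) coordinates of the points labelled as one fruit.
    Returns a unit vector along the longest axis of the cluster,
    i.e. the dominant eigenvector of its covariance matrix.
    """
    centered = points - points.mean(axis=0)
    # np.linalg.eigh returns eigenvalues in ascending order, so the
    # last eigenvector is the direction of maximal variance.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return vecs[:, -1]
```

Note that the sign of the returned vector is arbitrary; a consistent orientation (e.g. pointing away from the stem) has to be chosen downstream.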
- We start by projecting the CNN predictions about organ types and locations onto the 3D point cloud with the
- From there, we individualize each organ thanks to the
- Finally, the `AnglesAndInternodes` task computes the fruit directions and branching points, allowing us to estimate the successive angles and internode lengths between the fruits.
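The last step can be sketched as follows. Assuming we already have the branching points ordered along the stem and one direction vector per fruit, the successive (divergence) angle is the signed angle between consecutive fruit directions projected onto the plane normal to the stem, and the internode length is the distance between consecutive branching points. This is an illustrative simplification, not the actual `AnglesAndInternodes` implementation:

```python
import numpy as np

def angles_and_internodes(branch_points, directions, stem_axis):
    """Successive divergence angles (degrees) and internode lengths.

    branch_points: (N, 3) branching points, ordered along the stem.
    directions:    (N, 3) fruit direction vectors, same order.
    stem_axis:     (3,) vector along the main stem.
    """
    stem_axis = stem_axis / np.linalg.norm(stem_axis)
    # Project fruit directions onto the plane normal to the stem axis.
    proj = directions - np.outer(directions @ stem_axis, stem_axis)
    proj /= np.linalg.norm(proj, axis=1, keepdims=True)
    # Signed angle between successive projected directions,
    # measured around the stem axis.
    cross = np.cross(proj[:-1], proj[1:])
    angles = np.degrees(np.arctan2(cross @ stem_axis,
                                   np.sum(proj[:-1] * proj[1:], axis=1)))
    # Internode length: distance between successive branching points.
    internodes = np.linalg.norm(np.diff(branch_points, axis=0), axis=1)
    return angles, internodes
```

For N fruits this yields N-1 angles and N-1 internode lengths, which is what the phyllotaxis analysis consumes.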
The `ClusteredMesh` task generates a labelled triangular mesh that can be visualized, but it is not necessary for the quantification workflow above.
Note that this task could be used in place of the `OrganSegmentation` task, and it could later be used by