# How-to create a new ROMI evaluation task
To evaluate the accuracy of the reconstruction and quantification tasks, we offer the possibility to create evaluation tasks. The idea is to use a digital twin to generate the expected outcome of a task and use it as ground truth to challenge the reconstruction task.
To do so, you will have to create two tasks:
- a ground truth task, which generates the expected outcome of the evaluated task from the digital twin;
- an evaluation task, which compares the output of the evaluated task against the ground truth.
For example, the `Voxels` task has a `VoxelGroundTruth` task and a `VoxelEvaluation` task.
## Ground truth task
Ground truth tasks should be defined in `plant-3d-vision/plant3dvision/tasks/ground_truth.py`.
Such a task should inherit from `RomiTask` and define a `run` method exporting the ground truth for later use as a reference in the evaluation task, as sketched below.
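For illustration, here is a minimal sketch of such a task, assuming the luigi-based `RomiTask` API from `romitask` and the `plantdb` fileset accessors; the class name and the toy "expected outcome" are hypothetical placeholders for your own logic.

```python
# Minimal sketch only: `MyGroundTruth` is a hypothetical name, and the
# fileset accessors are assumed from the `romitask`/`plantdb` API.
import luigi
from plantdb import io
from romitask.task import RomiTask


class MyGroundTruth(RomiTask):
    """Generate the expected outcome of the evaluated task from the digital twin."""

    # The task providing the digital twin data (set it in your pipeline config).
    upstream_task = luigi.TaskParameter()

    def run(self):
        # Load the digital twin fileset produced by the upstream task
        # (assumed `FilesetTarget.get()` accessor):
        twin_fileset = self.input().get()
        # Derive the expected outcome here; as a toy placeholder we only
        # record the number of files in the digital twin dataset.
        ground_truth = {"n_files": len(twin_fileset.get_files())}
        # Export the ground truth to this task's output fileset so the
        # evaluation task can load it as a reference (assumed JSON writer):
        io.write_json(self.output_file(), ground_truth)
```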
**Warning:** Do not forget to reference the task in `romitask/romitask/modules.py`.
## Evaluation task
Evaluation tasks should be defined in `plant-3d-vision/plant3dvision/tasks/evaluation.py`.
The evaluation task that you will write should inherit from `EvaluationTask`, which defines:

- the `requires` method to use an `upstream_task` and `ground_truth`;
- the `output` method to create the corresponding evaluation dataset;
- the `evaluate` method that you should override;
- the `run` method that calls `evaluate` and saves the results as a JSON file.
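As a hedged illustration, a new evaluation task could then look like the sketch below; `MyTaskEvaluation` and the toy metric are hypothetical, and the fileset accessors are assumed from the same luigi-based API.

```python
# Minimal sketch only: the class and metric names are hypothetical, and the
# accessors are assumed from the luigi-based `romitask`/`plantdb` API.
import luigi
from plant3dvision.tasks.evaluation import EvaluationTask


class MyTaskEvaluation(EvaluationTask):
    """Compare the output of the evaluated task against the ground truth."""

    # Used by the inherited `requires` method:
    upstream_task = luigi.TaskParameter()  # the task under evaluation
    ground_truth = luigi.TaskParameter()   # e.g. the MyGroundTruth task above

    def evaluate(self):
        # Load the predicted and reference filesets (assumed accessors):
        prediction = self.upstream_task().output().get()
        reference = self.ground_truth().output().get()
        # Return a metrics dictionary; the inherited `run` method saves it
        # as a JSON file. The file-count check is a toy placeholder metric.
        return {"n_files_match": len(prediction.get_files()) == len(reference.get_files())}
```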
**Warning:** Do not forget to reference the task in `romitask/romitask/modules.py`.
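Regarding both warnings above: assuming `romitask/romitask/modules.py` maps task names to the module defining them (the actual layout may differ), registering the new tasks could look like:

```python
# Hedged illustration only; check the actual structure of modules.py.
MODULES = {
    # ... existing entries ...
    "MyGroundTruth": "plant3dvision.tasks.ground_truth",   # hypothetical task
    "MyTaskEvaluation": "plant3dvision.tasks.evaluation",  # hypothetical task
}
```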