Hello, the problem statement specifies that in the final round each of the top 10 competitors will be subject to 3 tests:
1) validation on the available test set,
2) testing on a new set of images, and
3) validation and scoring of steps 1) and 2).
My question is whether the testing/scoring will be performed using the already-trained, runnable model submitted in the Docker container OR a model trained from scratch (via the train.sh script), as required by the fMoW Final Testing Guide.