This challenge has no description.
These are the metrics defined:

driven_lanedir_consec: the median distance traveled along a lane (that is, going in circles will not make this metric increase). This is discretized to tiles.
survival_time: the median survival time. The simulation is terminated when the car goes outside of the road or crashes into an obstacle.
deviation-center-line: the median lateral deviation from the center line.
in-drivable-lane: the median time spent outside of the drivable zones. For example, this penalizes driving in the wrong lane.
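All four metrics are medians taken over the evaluation episodes. A minimal sketch of that aggregation, with hypothetical per-episode values (not the actual scoring code):

```python
from statistics import median

# Hypothetical per-episode measurements collected by an evaluator.
episodes = [
    {"driven_lanedir_consec": 3.1, "survival_time": 42.0,
     "deviation-center-line": 0.04, "in-drivable-lane": 1.2},
    {"driven_lanedir_consec": 2.7, "survival_time": 37.5,
     "deviation-center-line": 0.06, "in-drivable-lane": 0.8},
    {"driven_lanedir_consec": 4.0, "survival_time": 55.0,
     "deviation-center-line": 0.05, "in-drivable-lane": 0.0},
]

# Each reported metric is the median of the per-episode values.
scores = {name: median(ep[name] for ep in episodes) for name in episodes[0]}
print(scores)
```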
Depends on successful evaluation on LFVI 🚗🚗🚦 - Lane following + Vehicles + Intersections (simulation 👾, testing 🥇)
The submission must first pass the testing.
The sum of the following tests should be at least 2.0.
Test on absolute scores:
good_enough (1.0 points): driven_lanedir_consec_median.
Test on relative performance:
better-than-bea-straight (1.0 points): straight.

Depends on successful evaluation on LFV 🚗🚗 - Lane following + Vehicles (robotarium 🏎, validation 🏋)
The submission must first pass the LF real challenge.
The sum of the following tests should be at least 2.0.
Test on absolute scores:
good_enough (1.0 points): driven_lanedir_consec.
Test on relative performance:
better-than-bea-straight (1.0 points): straight.
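In both leaderboards, passing reduces to a thresholded sum of the points awarded by the individual tests. A minimal sketch of that criterion, with hypothetical per-test results (not the challenge server's actual code):

```python
# Points awarded by each test the submission passed (hypothetical values).
test_points = {
    "good_enough": 1.0,               # test on absolute scores
    "better-than-bea-straight": 1.0,  # test on relative performance
}

# The leaderboard requires the awarded points to sum to at least 2.0.
passes = sum(test_points.values()) >= 2.0
print("passes" if passes else "does not pass")
```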
At the beginning, execute step eval0.
If step eval0 has result success, then execute step eval0-visualize.
If step eval0 has result failed, then declare the submission FAILED.
If step eval0 has result error, then declare the submission ERROR.
If step eval0 has result success, then execute step eval0-videos.
If step eval1 has result success, then execute step eval1-videos.
If step eval2 has result success, then execute step eval2-videos.
If step eval0 has result success, then execute step eval1.
If step eval0-visualize has result failed, then declare the submission FAILED.
If step eval0-visualize has result error, then declare the submission ERROR.
If step eval1 has result success, then execute step eval1-visualize.
If step eval1 has result failed, then declare the submission FAILED.
If step eval1 has result error, then declare the submission ERROR.
If step eval1 has result success, then execute step eval2.
If step eval1-visualize has result failed, then declare the submission FAILED.
If step eval1-visualize has result error, then declare the submission ERROR.
If step eval2 has result success, then execute step eval2-visualize.
If step eval2 has result failed, then declare the submission FAILED.
If step eval2 has result error, then declare the submission ERROR.
If step eval2-visualize has result success, then declare the submission SUCCESS.
If step eval2-visualize has result failed, then declare the submission FAILED.
If step eval2-visualize has result error, then declare the submission ERROR.
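Taken together, these rules chain eval0 → eval1 → eval2, spawn the -videos and -visualize steps after each success, and let failures or errors of the gating steps decide the verdict. A minimal Python sketch of that control flow, assuming a hypothetical run_step function (the actual orchestration is done by the challenges server):

```python
# Minimal sketch of the step-ordering rules above, assuming a `run_step`
# callable that executes one step and returns "success", "failed", or "error".

NEXT_ON_SUCCESS = {
    "eval0": ["eval0-videos", "eval0-visualize", "eval1"],
    "eval1": ["eval1-videos", "eval1-visualize", "eval2"],
    "eval2": ["eval2-videos", "eval2-visualize"],
}
# Only these steps decide the verdict on failure/error; the -videos steps do not.
GATING = {"eval0", "eval0-visualize", "eval1", "eval1-visualize",
          "eval2", "eval2-visualize"}

def evaluate(run_step):
    pending = ["eval0"]                      # at the beginning, execute eval0
    while pending:
        step = pending.pop(0)
        result = run_step(step)
        if step in GATING and result == "failed":
            return "FAILED"
        if step in GATING and result == "error":
            return "ERROR"
        if result == "success":
            if step == "eval2-visualize":    # its success decides the submission
                return "SUCCESS"
            pending.extend(NEXT_ON_SUCCESS.get(step, []))
    return "UNDETERMINED"

# Example: if every step succeeds, the submission is a SUCCESS.
print(evaluate(lambda step: "success"))
```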
eval0
Timeout: 18000.0 s
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval0-evaluator:2020_11_05_22_39_50@sha256:d88423628a241db09806e24672973196f1b49d17238ec729a648b81ba31ddc5e
    environment: {}
    ports:
    - 8005:8005

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| # Duckiebots | 2 |
| AIDO 2 Map LFVI public | 1 |
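The SUBMISSION_CONTAINER substitution noted above is a plain text replacement in the compose skeleton. A minimal sketch of that idea, using a made-up fragment with a hypothetical solution service (the skeleton shown here only lists the evaluator):

```python
# Hypothetical compose fragment containing the placeholder; purely illustrative,
# since the skeleton above only shows the evaluator service.
skeleton = """\
services:
  solution:
    image: SUBMISSION_CONTAINER
"""

# Hypothetical reference to the user's submission image.
user_container = "docker.io/example-user/my-submission@sha256:..."

compose_text = skeleton.replace("SUBMISSION_CONTAINER", user_container)
print(compose_text)
```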
eval1
Timeout: 18000.0 s
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval1-evaluator:2020_11_05_22_40_18@sha256:d88423628a241db09806e24672973196f1b49d17238ec729a648b81ba31ddc5e
    environment: {}
    ports:
    - 8005:8005

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| # Duckiebots | 2 |
| AIDO 2 Map LFVI public | 1 |
eval2
Timeout: 18000.0 s
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval2-evaluator:2020_11_05_22_40_54@sha256:d88423628a241db09806e24672973196f1b49d17238ec729a648b81ba31ddc5e
    environment: {}
    ports:
    - 8005:8005

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| # Duckiebots | 2 |
| AIDO 2 Map LFVI public | 1 |
eval0-videos
Timeout: 10800.0 s
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval0-videos-evaluator:2020_11_01_23_51_11@sha256:a4e3115c85b169d63efbcc86d9ada6e23e7802cb523227458b345583a35a9942
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval0/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| Cloud simulations | 1 |
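The -videos steps are configured entirely through the environment shown above: worker index and count, input and output directories, a bag-name filter, and the output framerate. A minimal sketch of reading that configuration, with a hypothetical sharding of the logs between workers (the real video renderer is not shown here):

```python
import os

# Configuration as passed in the skeleton above (defaults mirror eval0-videos).
worker_i = int(os.environ.get("WORKER_I", "0"))
worker_n = int(os.environ.get("WORKER_N", "1"))
input_dir = os.environ.get("INPUT_DIR", "/challenges/previous-steps/eval0/logs_raw")
output_dir = os.environ.get("OUTPUT_DIR", "/challenges/challenge-evaluation-output")
debug_overlay = os.environ.get("DEBUG_OVERLAY", "0") == "1"
name_filter = os.environ.get("BAG_NAME_FILTER", "autobot,watchtower").split(",")
framerate = int(os.environ.get("OUTPUT_FRAMERATE", "7"))

# Hypothetical sharding: list the raw logs, keep those matching the name filter,
# and let worker i of n take every n-th file.
logs = sorted(
    name for name in (os.listdir(input_dir) if os.path.isdir(input_dir) else [])
    if any(token in name for token in name_filter)
)
my_share = logs[worker_i::worker_n]

print(f"worker {worker_i}/{worker_n}: {len(my_share)} logs to render at {framerate} fps "
      f"into {output_dir} (debug overlay: {debug_overlay})")
```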
eval1-videos
Timeout: 10800.0 s
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval1-videos-evaluator:2020_11_01_23_51_11@sha256:a4e3115c85b169d63efbcc86d9ada6e23e7802cb523227458b345583a35a9942
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval1/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| Cloud simulations | 1 |
eval2-videos
Timeout: 10800.0 s
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval2-videos-evaluator:2020_11_01_23_51_11@sha256:a4e3115c85b169d63efbcc86d9ada6e23e7802cb523227458b345583a35a9942
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval2/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| Cloud simulations | 1 |
eval0-visualize
Timeout: 1080.0 s
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval0-visualize-evaluator:2020_11_05_22_41_19@sha256:70887134d5412746f98523f7434912a63331d0e6ddcf93283ced07a2af58c345
    environment:
      STEP_NAME: eval0

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| Cloud simulations | 1 |
eval1-visualize
Timeout: 1080.0 s
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval1-visualize-evaluator:2020_11_05_22_41_43@sha256:70887134d5412746f98523f7434912a63331d0e6ddcf93283ced07a2af58c345
    environment:
      STEP_NAME: eval1

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| Cloud simulations | 1 |
eval2-visualize
Timeout: 1080.0 s
This is the Docker Compose configuration skeleton:

version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido5-lfvi-real-validation-eval2-visualize-evaluator:2020_11_05_22_42_07@sha256:70887134d5412746f98523f7434912a63331d0e6ddcf93283ced07a2af58c345
    environment:
      STEP_NAME: eval2

The text SUBMISSION_CONTAINER will be replaced with the user container.

| Resource | Quantity |
| Cloud simulations | 1 |