These are the metrics defined:
driven_lanedir_consec: The median distance traveled along a lane (going in circles does not increase this metric). The distance is discretized to tiles.
survival_time: The median survival time. The simulation is terminated when the car goes outside the road or crashes into an obstacle.
deviation-center-line: The median lateral deviation from the center line.
in-drivable-lane: The median time spent outside the drivable zones. For example, this penalizes driving in the wrong lane.
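A minimal sketch of this aggregation, assuming the evaluator collects one value per episode and publishes the per-metric median; the episode data below is invented for illustration and does not come from this page:

    import statistics

    # Invented per-episode results; the field names mirror the metrics above,
    # but the evaluator's actual data structures are not shown on this page.
    episodes = [
        {"driven_lanedir_consec": 3.0, "survival_time": 42.0,
         "deviation-center-line": 0.04, "in-drivable-lane": 1.5},
        {"driven_lanedir_consec": 5.0, "survival_time": 60.0,
         "deviation-center-line": 0.02, "in-drivable-lane": 0.0},
        {"driven_lanedir_consec": 4.0, "survival_time": 51.0,
         "deviation-center-line": 0.03, "in-drivable-lane": 0.5},
    ]

    # Each published metric is the median over episodes.
    metrics = {key: statistics.median(ep[key] for ep in episodes)
               for key in episodes[0]}
    print(metrics)  # e.g. driven_lanedir_consec -> 4.0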
Depends on successful evaluation on LF 🚗 - Lane following (simulation 👾, testing 🥇).
The submission must first pass that testing stage.
The scores of the following tests must sum to at least 2.0.

Tests on absolute scores:
good_enough (1.0 points): driven_lanedir_consec_median.

Tests on relative performance:
better-than-bea-straight (1.0 points): straight.
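A minimal sketch of the acceptance rule, assuming each test simply contributes its points when it passes (the precise pass conditions for each test are not spelled out on this page):

    # Hypothetical outcomes for the two tests listed above.
    test_points = {"good_enough": 1.0, "better-than-bea-straight": 1.0}
    passed = {"good_enough": True, "better-than-bea-straight": True}

    total = sum(pts for name, pts in test_points.items() if passed[name])
    accepted = total >= 2.0  # with 1.0 points each, both tests must pass
    print(total, accepted)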
At the beginning, execute step eval0.

If step eval0 has result success, then execute steps eval0-visualize, eval0-videos, and eval1.
If step eval0 has result failed, then declare the submission FAILED.
If step eval0 has result error, then declare the submission ERROR.
If step eval0-visualize has result failed, then declare the submission FAILED.
If step eval0-visualize has result error, then declare the submission ERROR.
If step eval1 has result success, then execute steps eval1-visualize, eval1-videos, and eval2.
If step eval1 has result failed, then declare the submission FAILED.
If step eval1 has result error, then declare the submission ERROR.
If step eval1-visualize has result failed, then declare the submission FAILED.
If step eval1-visualize has result error, then declare the submission ERROR.
If step eval2 has result success, then execute steps eval2-visualize and eval2-videos.
If step eval2 has result failed, then declare the submission FAILED.
If step eval2 has result error, then declare the submission ERROR.
If step eval2-visualize has result success, then declare the submission SUCCESS.
If step eval2-visualize has result failed, then declare the submission FAILED.
If step eval2-visualize has result error, then declare the submission ERROR.
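A compact sketch of this control flow, assuming each step reports one of "success", "failed", or "error"; run_step is a stub, and the real scheduler on the challenge server may differ:

    # Steps triggered when a step succeeds, per the rules above.
    NEXT = {
        "eval0": ["eval0-visualize", "eval0-videos", "eval1"],
        "eval1": ["eval1-visualize", "eval1-videos", "eval2"],
        "eval2": ["eval2-visualize", "eval2-videos"],
    }
    # Steps whose failure or error ends the evaluation.
    TERMINAL = {"eval0", "eval1", "eval2",
                "eval0-visualize", "eval1-visualize", "eval2-visualize"}

    def run_step(name: str) -> str:
        """Stub: would run the step's evaluator and return its result."""
        return "success"

    def evaluate_submission() -> str:
        pending = ["eval0"]
        while pending:
            step = pending.pop(0)
            result = run_step(step)
            if step in TERMINAL and result == "failed":
                return "FAILED"
            if step in TERMINAL and result == "error":
                return "ERROR"
            if step == "eval2-visualize" and result == "success":
                return "SUCCESS"
            if result == "success":
                pending.extend(NEXT.get(step, []))
        return "SUCCESS"  # defensive default; not reachable under the rules above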
Step eval0 (timeout: 18000.0 s)
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval0-evaluator:2020_10_19_20_42_41@sha256:1f09bd03108a0093537ddb512198ee60867fa786a02fbbe4e506250c06801e15
    environment: {}
    ports:
      - 8005:8005
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Duckiebots | 1 |
| AIDO 2 Map LF public | 1 |
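The skeleton above shows only the evaluator service; presumably the full template also contains a solution service whose image is the SUBMISSION_CONTAINER placeholder. A minimal sketch of that substitution, with a hypothetical solution service added for illustration (this is not the challenge server's actual templating code):

    # Hypothetical skeleton; only the SUBMISSION_CONTAINER mechanics matter here.
    skeleton = """\
    version: '3'
    services:
      evaluator:
        image: evaluator-image:tag  # stands in for the eval0 evaluator image
      solution:
        image: SUBMISSION_CONTAINER
    """
    user_image = "docker.io/your-name/your-solution:latest"  # hypothetical
    compose_text = skeleton.replace("SUBMISSION_CONTAINER", user_image)
    print(compose_text)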
Step eval1 (timeout: 18000.0 s)
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval1-evaluator:2020_10_19_20_43_01@sha256:1f09bd03108a0093537ddb512198ee60867fa786a02fbbe4e506250c06801e15
    environment: {}
    ports:
      - 8005:8005
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Duckiebots | 1 |
| AIDO 2 Map LF public | 1 |
Step eval2 (timeout: 18000.0 s)
Evaluation in the robotarium.
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval2-evaluator:2020_10_19_20_43_03@sha256:1f09bd03108a0093537ddb512198ee60867fa786a02fbbe4e506250c06801e15
    environment: {}
    ports:
      - 8005:8005
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Duckiebots | 1 |
| AIDO 2 Map LF public | 1 |
Step eval0-visualize (timeout: 1080.0 s)
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval0-visualize-evaluator:2020_10_19_20_43_04@sha256:cf6aa8b37c0c08cb9f5919f9672391421c35abab7ff4b94801e22b4a06287f18
    environment:
      STEP_NAME: eval0
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Cloud simulations | 1 |
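The STEP_NAME variable presumably tells the visualizer which previous step's outputs to process. A sketch of that reading, assuming the per-step directory layout visible in the *-videos configurations below; the path and behavior are assumptions, not the actual visualizer code:

    import os

    # Assumed: STEP_NAME selects which step's logs to visualize, using the
    # same layout as the INPUT_DIR of the *-videos steps below.
    step = os.environ.get("STEP_NAME", "eval0")
    logs_dir = f"/challenges/previous-steps/{step}/logs_raw"
    print(f"visualizing logs from {logs_dir}")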
Step eval1-visualize (timeout: 1080.0 s)
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval1-visualize-evaluator:2020_10_19_20_43_18@sha256:cf6aa8b37c0c08cb9f5919f9672391421c35abab7ff4b94801e22b4a06287f18
    environment:
      STEP_NAME: eval1
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Cloud simulations | 1 |
Step eval2-visualize (timeout: 1080.0 s)
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval2-visualize-evaluator:2020_10_19_20_43_19@sha256:cf6aa8b37c0c08cb9f5919f9672391421c35abab7ff4b94801e22b4a06287f18
    environment:
      STEP_NAME: eval2
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Cloud simulations | 1 |
Step eval0-videos (timeout: 10800.0 s)
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval0-videos-evaluator:2020_10_19_20_43_19@sha256:ede116f39aa7600b73a84916f2249fc2238d222840a876942b79ba81062469b2
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval0/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Cloud simulations | 1 |
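WORKER_I and WORKER_N read like a shard index and a shard count, and BAG_NAME_FILTER like a comma-separated substring filter on log (bag) names. A sketch under those assumptions; none of this is documented behavior on this page:

    import glob
    import os

    worker_i = int(os.environ.get("WORKER_I", "0"))
    worker_n = int(os.environ.get("WORKER_N", "1"))
    tokens = os.environ.get("BAG_NAME_FILTER", "autobot,watchtower").split(",")
    input_dir = os.environ.get("INPUT_DIR", "/challenges/previous-steps/eval0/logs_raw")
    framerate = int(os.environ.get("OUTPUT_FRAMERATE", "7"))

    # Assumed: each worker takes every worker_n-th bag, keeping only bags
    # whose filename contains one of the filter tokens.
    bags = sorted(glob.glob(os.path.join(input_dir, "*.bag")))
    mine = [b for i, b in enumerate(bags)
            if i % worker_n == worker_i
            and any(t in os.path.basename(b) for t in tokens)]
    print(f"worker {worker_i}/{worker_n}: {len(mine)} bags at {framerate} fps")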
Step eval1-videos (timeout: 10800.0 s)
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval1-videos-evaluator:2020_10_19_20_44_54@sha256:ede116f39aa7600b73a84916f2249fc2238d222840a876942b79ba81062469b2
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval1/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Cloud simulations | 1 |
Step eval2-videos (timeout: 10800.0 s)
This is the Docker Compose configuration skeleton:
version: '3'
services:
  evaluator:
    image: docker.io/andreacensi/aido3-lf-real-validation-eval2-videos-evaluator:2020_10_19_20_44_54@sha256:ede116f39aa7600b73a84916f2249fc2238d222840a876942b79ba81062469b2
    environment:
      WORKER_I: '0'
      WORKER_N: '1'
      INPUT_DIR: /challenges/previous-steps/eval2/logs_raw
      OUTPUT_DIR: /challenges/challenge-evaluation-output
      DEBUG_OVERLAY: '1'
      BAG_NAME_FILTER: autobot,watchtower
      OUTPUT_FRAMERATE: '7'
The text SUBMISSION_CONTAINER will be replaced with the user container.
Required resources:
| Cloud simulations | 1 |