
Submission 4187

Submission: 4187
Competing: yes
Challenge: aido3-LF-sim-validation
User: Liam Paull 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: step1-simulation: 27675
Next:
User label: challenge-aido_LF-template-ros - Template solution using ROS
Admin priority: 50
Blessing: n/a
User priority: 50

Job 27675

Episode images omitted; each linked to detailed per-episode statistics. Episodes evaluated:

ETHZ_autolab_technical_track-0-0
ETHZ_autolab_technical_track-1-0
ETHZ_autolab_technical_track-2-0
ETHZ_autolab_technical_track-3-0
ETHZ_autolab_technical_track-4-0

Evaluation jobs for this submission

Job 27675: step1-simulation, status: success, up to date: yes, duration: 0:05:29
driven_lanedir_consec_median: 0.5728900130190617
survival_time_median: 3.049999999999997
deviation-center-line_median: 0.12663250504026446
in-drivable-lane_median: 0.05000000000000005


other stats
agent_compute-ego_max: 0.026701339303630674
agent_compute-ego_mean: 0.025790140037141984
agent_compute-ego_median: 0.025621636969144223
agent_compute-ego_min: 0.02502281665802002
deviation-center-line_max: 0.22820877647883453
deviation-center-line_mean: 0.14132521348984317
deviation-center-line_min: 0.08492450847657326
deviation-heading_max: 0.7204674568635022
deviation-heading_mean: 0.511274559210195
deviation-heading_median: 0.46418759639345647
deviation-heading_min: 0.35358071986381573
driven_any_max: 1.029569969954737
driven_any_mean: 0.6843344650560483
driven_any_median: 0.7833781665871041
driven_any_min: 0.21586906071648584
driven_lanedir_consec_max: 0.9241157643596049
driven_lanedir_consec_mean: 0.5576745090662147
driven_lanedir_consec_min: 0.19811047289692163
driven_lanedir_max: 0.9241157643596049
driven_lanedir_mean: 0.5576745090662147
driven_lanedir_median: 0.5728900130190617
driven_lanedir_min: 0.19811047289692163
in-drivable-lane_max: 1.399999999999995
in-drivable-lane_mean: 0.32999999999999885
in-drivable-lane_min: 0
per-episodes details (see the parsing sketch after this table): {
  "ETHZ_autolab_technical_track-0-0": {"driven_any": 1.029569969954737, "sim_physics": 0.13023406838717527, "survival_time": 3.649999999999995, "driven_lanedir": 0.5728900130190617, "sim_render-ego": 0.013588882472417126, "in-drivable-lane": 1.399999999999995, "agent_compute-ego": 0.026701339303630674, "deviation-heading": 0.46418759639345647, "set_robot_commands": 0.011245260499928095, "deviation-center-line": 0.16420485117826172, "driven_lanedir_consec": 0.5728900130190617, "sim_compute_sim_state": 0.005673607734784688, "sim_compute_performance-ego": 0.008220349272636518, "sim_compute_robot_state-ego": 0.01042148185102907},
  "ETHZ_autolab_technical_track-1-0": {"driven_any": 0.21586906071648584, "sim_physics": 0.13756741285324098, "survival_time": 1.0000000000000002, "driven_lanedir": 0.19811047289692163, "sim_render-ego": 0.015101861953735352, "in-drivable-lane": 0, "agent_compute-ego": 0.02502281665802002, "deviation-heading": 0.35358071986381573, "set_robot_commands": 0.010785424709320068, "deviation-center-line": 0.08492450847657326, "driven_lanedir_consec": 0.19811047289692163, "sim_compute_sim_state": 0.004873025417327881, "sim_compute_performance-ego": 0.008894920349121094, "sim_compute_robot_state-ego": 0.01985337734222412},
  "ETHZ_autolab_technical_track-2-0": {"driven_any": 1.0285461590131229, "sim_physics": 0.14701800732999237, "survival_time": 3.699999999999995, "driven_lanedir": 0.9241157643596049, "sim_render-ego": 0.015316344596244194, "in-drivable-lane": 0.1999999999999993, "agent_compute-ego": 0.026433686952333193, "deviation-heading": 0.7204674568635022, "set_robot_commands": 0.012785908338185903, "deviation-center-line": 0.12663250504026446, "driven_lanedir_consec": 0.9241157643596049, "sim_compute_sim_state": 0.005543505823290026, "sim_compute_performance-ego": 0.008820926820909654, "sim_compute_robot_state-ego": 0.011481536401284707},
  "ETHZ_autolab_technical_track-3-0": {"driven_any": 0.7833781665871041, "sim_physics": 0.11730214416003618, "survival_time": 3.049999999999997, "driven_lanedir": 0.7738646695799877, "sim_render-ego": 0.014603689068653544, "in-drivable-lane": 0, "agent_compute-ego": 0.025621636969144223, "deviation-heading": 0.3999540773760856, "set_robot_commands": 0.010166754488085137, "deviation-center-line": 0.22820877647883453, "driven_lanedir_consec": 0.7738646695799877, "sim_compute_sim_state": 0.005312130099437276, "sim_compute_performance-ego": 0.008485446210767402, "sim_compute_robot_state-ego": 0.01053238696739322},
  "ETHZ_autolab_technical_track-4-0": {"driven_any": 0.36430896900879234, "sim_physics": 0.12822967022657394, "survival_time": 1.6000000000000008, "driven_lanedir": 0.3193916254754976, "sim_render-ego": 0.01351168006658554, "in-drivable-lane": 0.05000000000000005, "agent_compute-ego": 0.025171220302581787, "deviation-heading": 0.618182945554115, "set_robot_commands": 0.009737350046634674, "deviation-center-line": 0.1026554262752819, "driven_lanedir_consec": 0.3193916254754976, "sim_compute_sim_state": 0.005484975874423981, "sim_compute_performance-ego": 0.008083820343017578, "sim_compute_robot_state-ego": 0.010315470397472382}
}
set_robot_commands_max: 0.012785908338185903
set_robot_commands_mean: 0.010944139616430777
set_robot_commands_median: 0.010785424709320068
set_robot_commands_min: 0.009737350046634674
sim_compute_performance-ego_max: 0.008894920349121094
sim_compute_performance-ego_mean: 0.008501092599290449
sim_compute_performance-ego_median: 0.008485446210767402
sim_compute_performance-ego_min: 0.008083820343017578
sim_compute_robot_state-ego_max: 0.01985337734222412
sim_compute_robot_state-ego_mean: 0.0125208505918807
sim_compute_robot_state-ego_median: 0.01053238696739322
sim_compute_robot_state-ego_min: 0.010315470397472382
sim_compute_sim_state_max: 0.005673607734784688
sim_compute_sim_state_mean: 0.005377448989852771
sim_compute_sim_state_median: 0.005484975874423981
sim_compute_sim_state_min: 0.004873025417327881
sim_physics_max: 0.14701800732999237
sim_physics_mean: 0.13207026059140375
sim_physics_median: 0.13023406838717527
sim_physics_min: 0.11730214416003618
sim_render-ego_max: 0.015316344596244194
sim_render-ego_mean: 0.014424491631527153
sim_render-ego_median: 0.014603689068653544
sim_render-ego_min: 0.01351168006658554
simulation-passed: 1
survival_time_max: 3.699999999999995
survival_time_mean: 2.599999999999998
survival_time_min: 1.0000000000000002
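
The min / max / mean / median rows above appear to be plain aggregates over the five per-episode values in the per-episodes details. A minimal sketch in Python, assuming the JSON object from the details row has been saved to a local file (the file name episodes.json is hypothetical):

    import json
    import statistics

    # Load the per-episodes details JSON (saved manually from the report above).
    with open("episodes.json") as f:
        episodes = json.load(f)

    # Aggregate one metric across the five episodes the way the summary rows do.
    metric = "driven_lanedir_consec"
    values = [ep[metric] for ep in episodes.values()]

    print(metric + "_min:", min(values))
    print(metric + "_max:", max(values))
    print(metric + "_mean:", statistics.mean(values))
    print(metric + "_median:", statistics.median(values))

For example, running this on the data above reproduces driven_lanedir_consec_median = 0.5728900130190617.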
Job 27669: step1-simulation, status: success, up to date: yes, duration: 0:04:15
Job 25677: step1-simulation, status: success, up to date: no, duration: 0:05:42
Job 25401: step1-simulation, status: success, up to date: no, duration: 0:04:19
Job 25399: step1-simulation, status: success, up to date: no, duration: 0:05:55
Job 24962: step1-simulation, status: success, up to date: no, duration: 0:04:25
Job 24960: step1-simulation, status: success, up to date: no, duration: 0:03:43
Job 24374: step1-simulation, status: error, up to date: no, duration: 0:02:59
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission4187/step1-simulation-idsc-rudolf-8269-job24374:

File '/tmp/duckietown/DT18/evaluator/executions/aido3-LF-sim-validation/submission4187/step1-simulation-idsc-rudolf-8269-job24374/challenge-results/challenge_results.yaml' does not exist. 

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.
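
A minimal sketch in Python (with PyYAML) of the check this message describes; the working-directory placeholder is illustrative and this is not the evaluator's actual code:

    import os
    import yaml  # PyYAML

    # Hypothetical stand-in for the evaluator's working directory.
    working_dir = "."
    results_file = os.path.join(working_dir, "challenge-results", "challenge_results.yaml")

    if not os.path.exists(results_file):
        # The failure mode reported for job 24374: the evaluator exited
        # before writing its results file.
        raise FileNotFoundError(results_file + " does not exist; check the evaluator log")

    # If the file is present, the results can be loaded as YAML.
    with open(results_file) as f:
        print(yaml.safe_load(f))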