
Submission 9265

Submission: 9265
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58456
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

58456

Episodes:

LF-norm-loop-000

LF-norm-small_loop-000

LF-norm-techtrack-000

LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration | message
58456 | LFv-sim | success | yes | 0:35:11
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 10.289938512943106
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.4381078201182853
in-drivable-lane_median: 1.049999999999979


other stats
agent_compute-ego0_max: 0.02981035119786449
agent_compute-ego0_mean: 0.01691061159057681
agent_compute-ego0_median: 0.012682523755209334
agent_compute-ego0_min: 0.012467047654024071
complete-iteration_max: 0.1968025789967584
complete-iteration_mean: 0.17726011096587485
complete-iteration_median: 0.17539950264780646
complete-iteration_min: 0.16143885957112816
deviation-center-line_max: 3.9853104579313343
deviation-center-line_mean: 3.5474181478124565
deviation-center-line_min: 3.328146493081919
deviation-heading_max: 13.517635960013008
deviation-heading_mean: 10.92214356989012
deviation-heading_median: 10.821946842351892
deviation-heading_min: 8.527044634843683
driven_any_max: 12.562585307754413
driven_any_mean: 11.26846739018466
driven_any_median: 11.122056674421833
driven_any_min: 10.26717090414057
driven_lanedir_consec_max: 12.3280159623981
driven_lanedir_consec_mean: 10.433742033356651
driven_lanedir_consec_min: 8.827075145142299
driven_lanedir_max: 12.3280159623981
driven_lanedir_mean: 10.954257892904252
driven_lanedir_median: 10.846484543824628
driven_lanedir_min: 9.79604652156965
get_duckie_state_max: 2.090381047410036e-06
get_duckie_state_mean: 2.019808353929099e-06
get_duckie_state_median: 2.029138639705763e-06
get_duckie_state_min: 1.9305750888948336e-06
get_robot_state_max: 0.0037713134219306992
get_robot_state_mean: 0.0037034155427168847
get_robot_state_median: 0.0037608452383227193
get_robot_state_min: 0.003520658272291401
get_state_dump_max: 0.004690170486602656
get_state_dump_mean: 0.004607880840095056
get_state_dump_median: 0.00463066529870331
get_state_dump_min: 0.004480022276370948
get_ui_image_max: 0.034637404321929396
get_ui_image_mean: 0.03001897618931398
get_ui_image_median: 0.030216397294196163
get_ui_image_min: 0.025005705846934196
in-drivable-lane_max: 2.0999999999999828
in-drivable-lane_mean: 1.0499999999999852
in-drivable-lane_min: 0.0
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 12.562585307754413, "get_ui_image": 0.028306656931957337, "step_physics": 0.09480194842984138, "survival_time": 59.99999999999873, "driven_lanedir": 12.3280159623981, "get_state_dump": 0.004690170486602656, "get_robot_state": 0.003752457708442936, "sim_render-ego0": 0.003811707206808657, "get_duckie_state": 2.090381047410036e-06, "in-drivable-lane": 0.649999999999963, "deviation-heading": 8.527044634843683, "agent_compute-ego0": 0.012467047654024071, "complete-iteration": 0.16143885957112816, "set_robot_commands": 0.002283304557514429, "deviation-center-line": 3.378561448262384, "driven_lanedir_consec": 12.3280159623981, "sim_compute_sim_state": 0.009175833615534113, "sim_compute_performance-ego0": 0.0020659887026390567},
 "LF-norm-zigzag-000-ego0": {"driven_any": 10.26717090414057, "get_ui_image": 0.034637404321929396, "step_physics": 0.1182146812855056, "survival_time": 59.99999999999873, "driven_lanedir": 9.79604652156965, "get_state_dump": 0.004616216259336193, "get_robot_state": 0.003769232768202503, "sim_render-ego0": 0.0038681609942255966, "get_duckie_state": 2.0631842569546536e-06, "in-drivable-lane": 2.0999999999999828, "deviation-heading": 13.517635960013008, "agent_compute-ego0": 0.012844363616765489, "complete-iteration": 0.1968025789967584, "set_robot_commands": 0.002282444781605945, "deviation-center-line": 3.9853104579313343, "driven_lanedir_consec": 8.827075145142299, "sim_compute_sim_state": 0.014384213137090651, "sim_compute_performance-ego0": 0.002099569195216145},
 "LF-norm-techtrack-000-ego0": {"driven_any": 11.47303485278716, "get_ui_image": 0.03212613765643499, "step_physics": 0.11014910363634856, "survival_time": 59.99999999999873, "driven_lanedir": 11.116464195566971, "get_state_dump": 0.004645114338070427, "get_robot_state": 0.0037713134219306992, "sim_render-ego0": 0.003857370022433088, "get_duckie_state": 1.995093022456872e-06, "in-drivable-lane": 1.4499999999999948, "deviation-heading": 10.798312965353915, "agent_compute-ego0": 0.012520683893653175, "complete-iteration": 0.18414030047280905, "set_robot_commands": 0.002298738041289343, "deviation-center-line": 3.328146493081919, "driven_lanedir_consec": 10.003372133803929, "sim_compute_sim_state": 0.012587922697361066, "sim_compute_performance-ego0": 0.00210030648630922},
 "LF-norm-small_loop-000-ego0": {"driven_any": 10.771078496056504, "get_ui_image": 0.025005705846934196, "step_physics": 0.09012022205038332, "survival_time": 59.99999999999873, "driven_lanedir": 10.576504892082284, "get_state_dump": 0.004480022276370948, "get_robot_state": 0.003520658272291401, "sim_render-ego0": 0.003657476788853527, "get_duckie_state": 1.9305750888948336e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.845580719349869, "agent_compute-ego0": 0.02981035119786449, "complete-iteration": 0.16665870482280393, "set_robot_commands": 0.002141977924788425, "deviation-center-line": 3.4976541919741866, "driven_lanedir_consec": 10.576504892082284, "sim_compute_sim_state": 0.00594100308954269, "sim_compute_performance-ego0": 0.0019013409213559215}}
set_robot_commands_max: 0.002298738041289343
set_robot_commands_mean: 0.0022516163262995353
set_robot_commands_median: 0.002282874669560187
set_robot_commands_min: 0.002141977924788425
sim_compute_performance-ego0_max: 0.00210030648630922
sim_compute_performance-ego0_mean: 0.002041801326380086
sim_compute_performance-ego0_median: 0.002082778948927601
sim_compute_performance-ego0_min: 0.0019013409213559215
sim_compute_sim_state_max: 0.014384213137090651
sim_compute_sim_state_mean: 0.01052224313488213
sim_compute_sim_state_median: 0.010881878156447589
sim_compute_sim_state_min: 0.00594100308954269
sim_render-ego0_max: 0.0038681609942255966
sim_render-ego0_mean: 0.0037986787530802178
sim_render-ego0_median: 0.0038345386146208726
sim_render-ego0_min: 0.003657476788853527
simulation-passed: 1
step_physics_max: 0.1182146812855056
step_physics_mean: 0.10332148885051971
step_physics_median: 0.10247552603309498
step_physics_min: 0.09012022205038332
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
58455 | LFv-sim | success | yes | 0:29:02
58454 | LFv-sim | success | yes | 0:30:24
52374 | LFv-sim | error | no | 0:11:08 | InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140616511096816
- M:video_aido:cmdline(in:/;out:/) 140616511097344
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
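The root cause in this traceback is that `banner1.png` exists but is not a parseable image, which is precisely when PIL raises `UnidentifiedImageError` (a missing file would raise `FileNotFoundError` instead). A stdlib-only sketch of that diagnosis, checking the PNG magic bytes; `looks_like_png` is an illustrative helper, not part of the evaluator:

```python
# Sketch: distinguishing "file is missing" from "file is not a valid
# PNG" (the condition behind PIL's UnidentifiedImageError above).
# looks_like_png is an illustrative name, not an evaluator function.

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # the 8-byte header of every PNG file

def looks_like_png(path: str) -> bool:
    """Return True only if the file exists and starts with the PNG signature."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except OSError:
        return False
```

A zero-byte or truncated `banner1.png` (for example, a failed download baked into the evaluator image) still exists on disk but fails this check, matching the error seen here.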
41767 | LFv-sim | success | no | 0:09:02
38275 | LFv-sim | success | no | 0:11:04
36380 | LFv-sim | success | no | 0:09:49
35811 | LFv-sim | success | no | 0:01:04
35403 | LFv-sim | error | no | 0:22:36 | The result file is n [...]
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9265/LFv-sim-reg01-94a6fab21ac9-1-job35403:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9265/LFv-sim-reg01-94a6fab21ac9-1-job35403/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9265/LFv-sim-reg01-94a6fab21ac9-1-job35403/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9265/LFv-sim-reg01-94a6fab21ac9-1-job35403/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9265/LFv-sim-reg01-94a6fab21ac9-1-job35403/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9265/LFv-sim-reg01-94a6fab21ac9-1-job35403/logs/challenges-runner/stderr.log
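The failure mode described by this message is the runner looking for `challenge-results/challenge_results.yaml` under the job's working directory and not finding it. A minimal sketch of that existence check, assuming only the layout the error message itself describes; `results_path` is an illustrative helper name:

```python
# Sketch: the existence check described by the error message above.
# The runner expects challenge-results/challenge_results.yaml inside
# the job working directory; results_path is an illustrative helper.
from pathlib import Path
from typing import Optional

def results_path(workdir: str) -> Optional[Path]:
    """Return the results-file path if it exists, else None."""
    candidate = Path(workdir) / "challenge-results" / "challenge_results.yaml"
    return candidate if candidate.is_file() else None
```

When this returns `None`, as in job 35403, the only recoverable evidence is in the `challenges-runner` logs listed above.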
35043 | LFv-sim | success | no | 0:23:14
34564 | LFv-sim | success | no | 0:25:54
34563 | LFv-sim | success | no | 0:23:56