
Submission 9236

Submission: 9236
Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58506
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58506

Episodes (per-episode statistics images omitted): LF-norm-loop-000, LF-norm-small_loop-000, LF-norm-techtrack-000, LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58506 | LFv-sim | success | yes | | | 0:12:50 |
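The duration column uses H:MM:SS strings (e.g. 0:12:50). A small helper to turn these into seconds for sorting or comparison; `duration_to_seconds` is a hypothetical name, not part of the platform:

```python
def duration_to_seconds(text):
    """Parse an H:MM:SS duration such as '0:12:50' into total seconds."""
    h, m, s = (int(part) for part in text.split(":"))
    return h * 3600 + m * 60 + s

print(duration_to_seconds("0:12:50"))  # 770
```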
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 1.3557467930936014
survival_time_median: 11.900000000000034
deviation-center-line_median: 0.358087246756716
in-drivable-lane_median: 4.925000000000031


other stats
agent_compute-ego0_max: 0.013396600148821837
agent_compute-ego0_mean: 0.01296358060478726
agent_compute-ego0_median: 0.013051328396420037
agent_compute-ego0_min: 0.012355065477487132
complete-iteration_max: 0.24525123272302016
complete-iteration_mean: 0.1868881810274744
complete-iteration_median: 0.1743301765496692
complete-iteration_min: 0.15364113828753898
deviation-center-line_max: 1.0861910579111551
deviation-center-line_mean: 0.4718073970555271
deviation-center-line_min: 0.08486403679752132
deviation-heading_max: 3.216457426030052
deviation-heading_mean: 1.5339395553258703
deviation-heading_median: 1.1734552236037097
deviation-heading_min: 0.5723903480660093
driven_any_max: 12.255622758670292
driven_any_mean: 4.284026146127758
driven_any_median: 2.2767176183213373
driven_any_min: 0.3270465891980632
driven_lanedir_consec_max: 3.2101988113516176
driven_lanedir_consec_mean: 1.523064414279408
driven_lanedir_consec_min: 0.17056525957881163
driven_lanedir_max: 3.2101988113516176
driven_lanedir_mean: 1.523064414279408
driven_lanedir_median: 1.3557467930936014
driven_lanedir_min: 0.17056525957881163
get_duckie_state_max: 1.5847234917967112e-06
get_duckie_state_mean: 1.4062680353098008e-06
get_duckie_state_median: 1.385928215122338e-06
get_duckie_state_min: 1.268492219197816e-06
get_robot_state_max: 0.003821592602953815
get_robot_state_mean: 0.003729080841055708
get_robot_state_median: 0.0037194957545945502
get_robot_state_min: 0.003655739252079916
get_state_dump_max: 0.005174559234772753
get_state_dump_mean: 0.004876082714968617
get_state_dump_median: 0.004815974725082871
get_state_dump_min: 0.004697822174935971
get_ui_image_max: 0.03628009220339217
get_ui_image_mean: 0.031021839296525015
get_ui_image_median: 0.03062872244971347
get_ui_image_min: 0.026549820083280953
in-drivable-lane_max: 43.19999999999938
in-drivable-lane_mean: 13.57499999999986
in-drivable-lane_min: 1.2499999999999982
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 2.8846816030336377, "get_ui_image": 0.029223420403220436, "step_physics": 0.09831547817397196, "survival_time": 14.800000000000075, "driven_lanedir": 2.0416353056683323, "get_state_dump": 0.00491176871739654, "get_robot_state": 0.003726364386202109, "sim_render-ego0": 0.00391993618974782, "get_duckie_state": 1.431314230767966e-06, "in-drivable-lane": 4.500000000000064, "deviation-heading": 1.5563851776150908, "agent_compute-ego0": 0.01293753373502481, "complete-iteration": 0.167893962025241, "set_robot_commands": 0.002403015239471538, "deviation-center-line": 0.5549950859815379, "driven_lanedir_consec": 2.0416353056683323, "sim_compute_sim_state": 0.010283693320020682, "sim_compute_performance-ego0": 0.0020788028986767084}, "LF-norm-zigzag-000-ego0": {"driven_any": 0.3270465891980632, "get_ui_image": 0.03628009220339217, "step_physics": 0.16962254272316987, "survival_time": 2.5999999999999988, "driven_lanedir": 0.17056525957881163, "get_state_dump": 0.004697822174935971, "get_robot_state": 0.0037126271229869913, "sim_render-ego0": 0.004001127099091152, "get_duckie_state": 1.34054219947671e-06, "in-drivable-lane": 1.2499999999999982, "deviation-heading": 0.7905252695923287, "agent_compute-ego0": 0.013165123057815264, "complete-iteration": 0.24525123272302016, "set_robot_commands": 0.0022809505462646484, "deviation-center-line": 0.08486403679752132, "driven_lanedir_consec": 0.17056525957881163, "sim_compute_sim_state": 0.009392382963648385, "sim_compute_performance-ego0": 0.002010381446694428}, "LF-norm-techtrack-000-ego0": {"driven_any": 12.255622758670292, "get_ui_image": 0.0320340244962065, "step_physics": 0.10649610285790977, "survival_time": 59.549999999998754, "driven_lanedir": 3.2101988113516176, "get_state_dump": 0.005174559234772753, "get_robot_state": 0.003821592602953815, "sim_render-ego0": 0.004084688905101494, "get_duckie_state": 1.5847234917967112e-06, "in-drivable-lane": 43.19999999999938, 
"deviation-heading": 3.216457426030052, "agent_compute-ego0": 0.013396600148821837, "complete-iteration": 0.1807663910740974, "set_robot_commands": 0.0022665404233356452, "deviation-center-line": 1.0861910579111551, "driven_lanedir_consec": 3.2101988113516176, "sim_compute_sim_state": 0.011257768277353888, "sim_compute_performance-ego0": 0.0021342503144437036}, "LF-norm-small_loop-000-ego0": {"driven_any": 1.668753633609037, "get_ui_image": 0.026549820083280953, "step_physics": 0.09317561144328249, "survival_time": 8.999999999999993, "driven_lanedir": 0.6698582805188702, "get_state_dump": 0.004720180732769202, "get_robot_state": 0.003655739252079916, "sim_render-ego0": 0.0037524423546553974, "get_duckie_state": 1.268492219197816e-06, "in-drivable-lane": 5.349999999999998, "deviation-heading": 0.5723903480660093, "agent_compute-ego0": 0.012355065477487132, "complete-iteration": 0.15364113828753898, "set_robot_commands": 0.0022539868539209525, "deviation-center-line": 0.16117940753189405, "driven_lanedir_consec": 0.6698582805188702, "sim_compute_sim_state": 0.005135584931347251, "sim_compute_performance-ego0": 0.0019565590178768936}}
set_robot_commands_max: 0.002403015239471538
set_robot_commands_mean: 0.002301123265748196
set_robot_commands_median: 0.002273745484800147
set_robot_commands_min: 0.0022539868539209525
sim_compute_performance-ego0_max: 0.0021342503144437036
sim_compute_performance-ego0_mean: 0.002044998419422933
sim_compute_performance-ego0_median: 0.002044592172685568
sim_compute_performance-ego0_min: 0.0019565590178768936
sim_compute_sim_state_max: 0.011257768277353888
sim_compute_sim_state_mean: 0.00901735737309255
sim_compute_sim_state_median: 0.009838038141834532
sim_compute_sim_state_min: 0.005135584931347251
sim_render-ego0_max: 0.004084688905101494
sim_render-ego0_mean: 0.003939548637148966
sim_render-ego0_median: 0.003960531644419486
sim_render-ego0_min: 0.0037524423546553974
simulation-passed: 1
step_physics_max: 0.16962254272316987
step_physics_mean: 0.11690243379958352
step_physics_median: 0.10240579051594088
step_physics_min: 0.09317561144328249
survival_time_max: 59.549999999998754
survival_time_mean: 21.487499999999702
survival_time_min: 2.5999999999999988
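The aggregate rows (_min/_max/_mean/_median) are per-statistic summaries over the four episodes in the per-episodes details block. A minimal sketch verifying this for survival_time, using values copied from the details above:

```python
import statistics

# Per-episode survival_time values, copied from the per-episodes details block
survival_time = {
    "LF-norm-loop-000-ego0": 14.800000000000075,
    "LF-norm-zigzag-000-ego0": 2.5999999999999988,
    "LF-norm-techtrack-000-ego0": 59.549999999998754,
    "LF-norm-small_loop-000-ego0": 8.999999999999993,
}

values = list(survival_time.values())
survival_time_median = statistics.median(values)  # ~11.9, matching survival_time_median above
survival_time_mean = statistics.mean(values)      # ~21.4875, matching survival_time_mean above
survival_time_min = min(values)
survival_time_max = max(values)
```

With four episodes, the median is the average of the two middle values (9.0 and 14.8), which is where 11.9 comes from.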
58505 | LFv-sim | success | yes | | | 0:06:58 |
58504 | LFv-sim | success | yes | | | 0:05:51 |
58503 | LFv-sim | success | yes | | | 0:06:48 |
58502 | LFv-sim | success | yes | | | 0:07:10 |
58501 | LFv-sim | success | yes | | | 0:07:07 |
58500 | LFv-sim | success | yes | | | 0:05:34 |
52439 | LFv-sim | error | no | | | 0:01:59 | InvalidEvaluator (traceback below)
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139867728746336
- M:video_aido:cmdline(in:/;out:/) 139867728746384
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
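The root cause in the traceback above is PIL's UnidentifiedImageError on 'banner1.png': the file exists but its contents are not a readable image. A cheap stdlib triage for this failure mode is to check the fixed 8-byte PNG signature before handing the file to an image pipeline; `looks_like_png` is a hypothetical helper sketched here, not part of the evaluator:

```python
# Every valid PNG file starts with this fixed 8-byte signature.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """Cheap triage: does the file start with the PNG magic bytes?

    Returns False for missing files, empty files, or files with
    non-PNG contents (e.g. an HTML error page saved as .png).
    """
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except OSError:
        return False
```

This does not prove the file is a fully valid PNG, but it catches the common case behind "cannot identify image file": a download that silently produced something other than an image.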
52429 | LFv-sim | error | no | | | 0:02:21 | InvalidEvaluator (traceback below)
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139750476404912
- M:video_aido:cmdline(in:/;out:/) 139750475576800
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
41781 | LFv-sim | success | no | | | 0:07:11 |
38329 | LFv-sim | success | no | | | 0:09:34 |
36419 | LFv-sim | error | no | | | 0:00:44 | solution container exited with code 1 (details below)
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-Sandy1-sandy-1-job36419-a-wd/challenge-results/challenge_results.yaml' does not exist.
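The NoResultsFound error above means the evaluator exited without writing challenge-results/challenge_results.yaml into the job's working directory. A minimal sketch of that kind of check; the helper name and the diagnostic listing are hypothetical (the real logic lives in duckietown_challenges_runner):

```python
from pathlib import Path

def find_results_file(wd):
    """Return the challenge_results.yaml path under wd, or raise a
    FileNotFoundError that lists the files actually present (sketch)."""
    results = Path(wd) / "challenge-results" / "challenge_results.yaml"
    if not results.exists():
        listing = "\n".join(
            f" - {p}" for p in sorted(Path(wd).rglob("*")) if p.is_file()
        )
        raise FileNotFoundError(
            f"File '{results}' does not exist.\n\nList of all files:\n{listing}"
        )
    return results
```

A missing results file usually indicates the evaluator crashed or never finished, which is why the runner points you at the container logs rather than the (absent) scores.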
36418 | LFv-sim | success | no | | | 0:10:08 |
36416 | LFv-sim | error | no | | | 0:00:48 | solution container exited with code 1 (details below)
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-Sandy1-sandy-1-job36416-a-wd/challenge-results/challenge_results.yaml' does not exist.
35836 | LFv-sim | success | no | | | 0:00:56 |
35425 | LFv-sim | error | no | | | 0:22:15 | result file not found (details below)
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-reg02-1b92df2e7e91-1-job35425:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-reg02-1b92df2e7e91-1-job35425/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-reg02-1b92df2e7e91-1-job35425/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-reg02-1b92df2e7e91-1-job35425/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-reg02-1b92df2e7e91-1-job35425/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9236/LFv-sim-reg02-1b92df2e7e91-1-job35425/logs/challenges-runner/stderr.log
35070 | LFv-sim | success | no | | | 0:23:10 |
34481 | LFv-sim | success | no | | | 0:27:00 |
34480 | LFv-sim | success | no | | | 0:24:25 |