
Submission 6851

Submission: 6851
Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58514
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58514

Detailed statistics are available for each episode:

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
58514 | LFv-sim | success | yes | 0:07:36
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.9324213415597488
survival_time_median: 9.5
deviation-center-line_median: 0.487535361188254
in-drivable-lane_median: 5.675000000000005


other stats:

agent_compute-ego0_max: 0.013905436028051011
agent_compute-ego0_mean: 0.013062365213044796
agent_compute-ego0_median: 0.01287655886070731
agent_compute-ego0_min: 0.012590907102713557
complete-iteration_max: 0.21493110045088523
complete-iteration_mean: 0.1879817648753586
complete-iteration_median: 0.18879019532980576
complete-iteration_min: 0.1594155683909377
deviation-center-line_max: 0.8553287516160403
deviation-center-line_mean: 0.5082573773228142
deviation-center-line_min: 0.2026300352987085
deviation-heading_max: 5.130479615893198
deviation-heading_mean: 1.9344651590663051
deviation-heading_median: 1.0255217551354423
deviation-heading_min: 0.5563375101011377
driven_any_max: 3.6498195603576455
driven_any_mean: 2.03564541903372
driven_any_median: 1.7106702165631775
driven_any_min: 1.071421682650878
driven_lanedir_consec_max: 1.3848377930371043
driven_lanedir_consec_mean: 0.931983963372651
driven_lanedir_consec_min: 0.4782553773340015
driven_lanedir_max: 1.59692229215755
driven_lanedir_mean: 0.9850050881527624
driven_lanedir_median: 0.9324213415597488
driven_lanedir_min: 0.4782553773340015
get_duckie_state_max: 1.6572285283562597e-06
get_duckie_state_mean: 1.447781320160741e-06
get_duckie_state_median: 1.419734470774042e-06
get_duckie_state_min: 1.29442781073862e-06
get_robot_state_max: 0.003911185810584149
get_robot_state_mean: 0.0037591291109355662
get_robot_state_median: 0.003721478570812577
get_robot_state_min: 0.0036823734915329633
get_state_dump_max: 0.005162957255825675
get_state_dump_mean: 0.00489901328836042
get_state_dump_median: 0.0049222350754205575
get_state_dump_min: 0.004588625746774891
get_ui_image_max: 0.03712986401862499
get_ui_image_mean: 0.031436826018930726
get_ui_image_median: 0.03085739854425757
get_ui_image_min: 0.02690264296858278
in-drivable-lane_max: 9.400000000000066
in-drivable-lane_mean: 5.575000000000018
in-drivable-lane_min: 1.5499999999999954
per-episodes details:

{"LF-norm-loop-000-ego0": {"driven_any": 1.480077020648379, "get_ui_image": 0.028474433290446462, "step_physics": 0.10358344411557438, "survival_time": 8.09999999999998, "driven_lanedir": 1.2558796560684389, "get_state_dump": 0.005162957255825675, "get_robot_state": 0.0036823734915329633, "sim_render-ego0": 0.003795331241163008, "get_duckie_state": 1.6572285283562597e-06, "in-drivable-lane": 1.5499999999999954, "deviation-heading": 1.4238999874652858, "agent_compute-ego0": 0.012590907102713557, "complete-iteration": 0.17038747284309996, "set_robot_commands": 0.0022039823005535852, "deviation-center-line": 0.7154233014559347, "driven_lanedir_consec": 1.2558796560684389, "sim_compute_sim_state": 0.008757149514976454, "sim_compute_performance-ego0": 0.0020421019361063016},
 "LF-norm-zigzag-000-ego0": {"driven_any": 3.6498195603576455, "get_ui_image": 0.03712986401862499, "step_physics": 0.13494292478910916, "survival_time": 19.050000000000136, "driven_lanedir": 1.59692229215755, "get_state_dump": 0.00472329549140331, "get_robot_state": 0.00369926397713067, "sim_render-ego0": 0.003864073004398046, "get_duckie_state": 1.3106780526525688e-06, "in-drivable-lane": 9.400000000000066, "deviation-heading": 5.130479615893198, "agent_compute-ego0": 0.013110464155986047, "complete-iteration": 0.21493110045088523, "set_robot_commands": 0.002234040754627807, "deviation-center-line": 0.8553287516160403, "driven_lanedir_consec": 1.3848377930371043, "sim_compute_sim_state": 0.013044216870013331, "sim_compute_performance-ego0": 0.002091359717683642},
 "LF-norm-techtrack-000-ego0": {"driven_any": 1.071421682650878, "get_ui_image": 0.03324036379806868, "step_physics": 0.13288927624243815, "survival_time": 6.499999999999985, "driven_lanedir": 0.4782553773340015, "get_state_dump": 0.005121174659437806, "get_robot_state": 0.003911185810584149, "sim_render-ego0": 0.004283124253950046, "get_duckie_state": 1.5287908888955153e-06, "in-drivable-lane": 3.7499999999999862, "deviation-heading": 0.627143522805599, "agent_compute-ego0": 0.013905436028051011, "complete-iteration": 0.20719291781651156, "set_robot_commands": 0.0024423453644031785, "deviation-center-line": 0.2026300352987085, "driven_lanedir_consec": 0.4782553773340015, "sim_compute_sim_state": 0.009055760070567822, "sim_compute_performance-ego0": 0.002242528755246228},
 "LF-norm-small_loop-000-ego0": {"driven_any": 1.941263412477976, "get_ui_image": 0.02690264296858278, "step_physics": 0.09834866763249923, "survival_time": 10.90000000000002, "driven_lanedir": 0.6089630270510591, "get_state_dump": 0.004588625746774891, "get_robot_state": 0.003743693164494484, "sim_render-ego0": 0.00375579154654725, "get_duckie_state": 1.29442781073862e-06, "in-drivable-lane": 7.600000000000024, "deviation-heading": 0.5563375101011377, "agent_compute-ego0": 0.012642653565428572, "complete-iteration": 0.1594155683909377, "set_robot_commands": 0.0022174473766866883, "deviation-center-line": 0.2596474209205733, "driven_lanedir_consec": 0.6089630270510591, "sim_compute_sim_state": 0.005143482391148397, "sim_compute_performance-ego0": 0.001987592270385185}}
set_robot_commands_max: 0.0024423453644031785
set_robot_commands_mean: 0.002274453949067815
set_robot_commands_median: 0.002225744065657248
set_robot_commands_min: 0.0022039823005535852
sim_compute_performance-ego0_max: 0.002242528755246228
sim_compute_performance-ego0_mean: 0.0020908956698553393
sim_compute_performance-ego0_median: 0.002066730826894972
sim_compute_performance-ego0_min: 0.001987592270385185
sim_compute_sim_state_max: 0.013044216870013331
sim_compute_sim_state_mean: 0.0090001522116765
sim_compute_sim_state_median: 0.008906454792772137
sim_compute_sim_state_min: 0.005143482391148397
sim_render-ego0_max: 0.004283124253950046
sim_render-ego0_mean: 0.0039245800115145875
sim_render-ego0_median: 0.003829702122780527
sim_render-ego0_min: 0.00375579154654725
simulation-passed: 1
step_physics_max: 0.13494292478910916
step_physics_mean: 0.11744107819490524
step_physics_median: 0.11823636017900628
step_physics_min: 0.09834866763249923
survival_time_max: 19.050000000000136
survival_time_mean: 11.137500000000031
survival_time_min: 6.499999999999985
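As a sanity check, the aggregate statistics above can be recomputed from the per-episodes details. A minimal sketch using only the four survival_time values copied from the JSON:

```python
import statistics

# survival_time per episode, copied from the per-episodes details above
survival_time = {
    "LF-norm-loop-000-ego0": 8.09999999999998,
    "LF-norm-zigzag-000-ego0": 19.050000000000136,
    "LF-norm-techtrack-000-ego0": 6.499999999999985,
    "LF-norm-small_loop-000-ego0": 10.90000000000002,
}

values = list(survival_time.values())

# With four episodes, the median is the mean of the two middle values:
# (8.1 + 10.9) / 2 = 9.5, matching survival_time_median above.
print(statistics.median(values))  # 9.5
print(statistics.mean(values))    # 11.1375..., matching survival_time_mean
```

The same recomputation works for every `_min`/`_max`/`_mean`/`_median` family in the table, since each is a plain aggregate over the four episodes.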
58513 | LFv-sim | success | yes | 0:07:13
58510 | LFv-sim | success | yes | 0:07:01
52426 | LFv-sim | error | no | 0:02:36
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139868015587392
- M:video_aido:cmdline(in:/;out:/) 139868015588160
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
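The root cause of this chain is that PIL could not parse 'banner1.png': UnidentifiedImageError means the file exists but its contents match no image format PIL recognizes, typically because the file is zero bytes or truncated. A stdlib-only sketch of a pre-flight header check (the helper `looks_like_png` is hypothetical, not part of the evaluator):

```python
import pathlib
import tempfile

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # first 8 bytes of every valid PNG file

def looks_like_png(path) -> bool:
    """Return True if the file starts with the PNG magic bytes.

    When PIL says "cannot identify image file", checks like this fail
    for every format it knows: the file's header matches nothing.
    """
    data = pathlib.Path(path).read_bytes()
    return data.startswith(PNG_SIGNATURE)

# Simulate the failing artefact: an empty file named banner1.png
with tempfile.TemporaryDirectory() as d:
    bad = pathlib.Path(d) / "banner1.png"
    bad.write_bytes(b"")                       # zero-byte file
    good = pathlib.Path(d) / "ok.png"
    good.write_bytes(PNG_SIGNATURE + b"\x00")  # header only, for illustration
    print(looks_like_png(bad))   # False: PIL would raise UnidentifiedImageError
    print(looks_like_png(good))  # True
```

Checking the asset before handing it to the video pipeline would turn this InvalidEvaluator failure into a clear "missing/corrupt banner" message.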
41789 | LFv-sim | success | no | 0:08:50
38337 | LFv-sim | success | no | 0:14:31
38336 | LFv-sim | success | no | 0:13:58
36438 | LFv-sim | success | no | 0:09:03
36436 | LFv-sim | host-error | no | 0:00:17
Error while running Docker Compose:

Could not run command
│    cmd: [docker-compose, -p, reg05-7d9c5d9f1ec5-1-job36436-578473, pull]
│ stdout: ''
│ stderr: ''
│      e: Command '['docker-compose', '-p', 'reg05-7d9c5d9f1ec5-1-job36436-578473', 'pull']' returned non-zero exit status 1.
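Note that the failed `docker-compose pull` above was logged with empty stdout and stderr, which leaves only the exit status to diagnose. A hedged sketch of the general pattern the runner follows: run a command with subprocess, capture both streams, and surface them on failure. A stub command stands in for docker-compose here; this is not the challenges-runner code:

```python
import subprocess
import sys

# Stub command that fails like a broken `docker-compose pull`:
# writes a diagnostic to stderr and exits with status 1.
cmd = [sys.executable, "-c",
       "import sys; sys.stderr.write('pull failed\\n'); sys.exit(1)"]

result = subprocess.run(cmd, capture_output=True, text=True)

if result.returncode != 0:
    print(f"cmd    : {cmd}")
    print(f"stdout : {result.stdout!r}")
    print(f"stderr : {result.stderr!r}")  # non-empty here, unlike the log above
```

When both captured streams come back empty, as in job 36436, the failure usually happened before the child process produced output (for example, an image-resolution or registry error reported only through the exit status).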
35854 | LFv-sim | success | no | 0:01:08
35438 | LFv-sim | error | no | 0:23:05
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6851/LFv-sim-reg01-94a6fab21ac9-1-job35438:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6851/LFv-sim-reg01-94a6fab21ac9-1-job35438/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6851/LFv-sim-reg01-94a6fab21ac9-1-job35438/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6851/LFv-sim-reg01-94a6fab21ac9-1-job35438/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6851/LFv-sim-reg01-94a6fab21ac9-1-job35438/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6851/LFv-sim-reg01-94a6fab21ac9-1-job35438/logs/challenges-runner/stderr.log
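As the message above explains, the evaluator decides success by the presence of challenge-results/challenge_results.yaml in the job's working directory. A small sketch of that existence check (`find_results_file` is a hypothetical helper and the directory layout is the one shown in the file listing):

```python
import pathlib
import tempfile

def find_results_file(workdir):
    """Return the challenge_results.yaml path if present, else None.

    The file is only written after a successful evaluation, so its
    absence usually points to a crash or import error earlier in the run,
    as the error message for job 35438 suggests.
    """
    candidate = pathlib.Path(workdir) / "challenge-results" / "challenge_results.yaml"
    return candidate if candidate.is_file() else None

with tempfile.TemporaryDirectory() as workdir:
    print(find_results_file(workdir))  # None: file absent, job marked as error

    # Once the evaluator finishes, the file exists and the check passes.
    results_dir = pathlib.Path(workdir) / "challenge-results"
    results_dir.mkdir()
    (results_dir / "challenge_results.yaml").write_text("scores: {}\n")
    print(find_results_file(workdir))  # the results path is returned
```

When this check fails, the listed stdout.log and stderr.log under logs/challenges-runner are the place to look for the underlying crash.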
35121 | LFv-sim | success | no | 0:23:15
33463 | LFv-sim | success | no | 0:25:52
33449 | LFv-sim | success | no | 0:15:14