
Submission 9242

Submission: 9242
Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58467
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58467

Episodes (detailed per-episode statistics are available on the dashboard):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
58467 | LFv-sim | success | yes | 0:23:33
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 4.888234996477122
survival_time_median: 48.1249999999994
deviation-center-line_median: 2.6886316932681806
in-drivable-lane_median: 8.699999999999838


Other stats

agent_compute-ego0_max: 0.012563939586348598
agent_compute-ego0_mean: 0.01232624382690778
agent_compute-ego0_median: 0.01230977226564149
agent_compute-ego0_min: 0.012121491189999544
complete-iteration_max: 0.1965221717218647
complete-iteration_mean: 0.1738148515570267
complete-iteration_median: 0.17340498380952846
complete-iteration_min: 0.1519272668871852
deviation-center-line_max: 3.9719556029252767
deviation-center-line_mean: 2.409625978040707
deviation-center-line_min: 0.2892849227011912
deviation-heading_max: 12.794208647009656
deviation-heading_mean: 7.506639714108826
deviation-heading_median: 7.731597069145524
deviation-heading_min: 1.769156071134603
driven_any_max: 8.33813283590563
driven_any_mean: 5.749068728246053
driven_any_median: 6.626074881019299
driven_any_min: 1.405992315039984
driven_lanedir_consec_max: 6.900567019774538
driven_lanedir_consec_mean: 4.3392661376193065
driven_lanedir_consec_min: 0.6800275377484422
driven_lanedir_max: 6.900567019774538
driven_lanedir_mean: 4.339393276324547
driven_lanedir_median: 4.888234996477122
driven_lanedir_min: 0.6805360925694022
get_duckie_state_max: 2.1644860259757555e-06
get_duckie_state_mean: 2.11733935644194e-06
get_duckie_state_median: 2.1324567994598423e-06
get_duckie_state_min: 2.03995780087232e-06
get_robot_state_max: 0.0037422531550373264
get_robot_state_mean: 0.0036576744522852215
get_robot_state_median: 0.003638782366193726
get_robot_state_min: 0.003610879921716106
get_state_dump_max: 0.004789837807283273
get_state_dump_mean: 0.0046484308265854575
get_state_dump_median: 0.004613128658263729
get_state_dump_min: 0.004577628182531098
get_ui_image_max: 0.03457198870021666
get_ui_image_mean: 0.02983508980400543
get_ui_image_median: 0.029557032445094052
get_ui_image_min: 0.02565430562561696
in-drivable-lane_max: 13.89999999999963
in-drivable-lane_mean: 9.199999999999836
in-drivable-lane_min: 5.500000000000035
per-episodes details:

LF-norm-loop-000-ego0: {"driven_any": 4.921008821408285, "get_ui_image": 0.027405907956693457, "step_physics": 0.09699032326375158, "survival_time": 36.25000000000008, "driven_lanedir": 3.6250433712827066, "get_state_dump": 0.004614398499165685, "get_robot_state": 0.003610879921716106, "sim_render-ego0": 0.003756736592484571, "get_duckie_state": 2.1644860259757555e-06, "in-drivable-lane": 9.499999999999831, "deviation-heading": 3.697055184936701, "agent_compute-ego0": 0.012220600419793249, "complete-iteration": 0.1630732816112928, "set_robot_commands": 0.002175546546284489, "deviation-center-line": 1.5746738803926998, "driven_lanedir_consec": 3.6250433712827066, "sim_compute_sim_state": 0.010227843421221437, "sim_compute_performance-ego0": 0.0019866490823835055}

LF-norm-zigzag-000-ego0: {"driven_any": 1.405992315039984, "get_ui_image": 0.03457198870021666, "step_physics": 0.12331353067817176, "survival_time": 11.100000000000025, "driven_lanedir": 0.6805360925694022, "get_state_dump": 0.004789837807283273, "get_robot_state": 0.0036613524227398927, "sim_render-ego0": 0.0037950111611541617, "get_duckie_state": 2.1233152380973233e-06, "in-drivable-lane": 5.500000000000035, "deviation-heading": 1.769156071134603, "agent_compute-ego0": 0.012563939586348598, "complete-iteration": 0.1965221717218647, "set_robot_commands": 0.0022440929583904456, "deviation-center-line": 0.2892849227011912, "driven_lanedir_consec": 0.6800275377484422, "sim_compute_sim_state": 0.009551276005971592, "sim_compute_performance-ego0": 0.001948087204732168}

LF-norm-techtrack-000-ego0: {"driven_any": 8.331140940630313, "get_ui_image": 0.03170815693349465, "step_physics": 0.11011428181872976, "survival_time": 59.99999999999873, "driven_lanedir": 6.900567019774538, "get_state_dump": 0.004611858817361773, "get_robot_state": 0.0037422531550373264, "sim_render-ego0": 0.003805977021724755, "get_duckie_state": 2.03995780087232e-06, "in-drivable-lane": 7.899999999999841, "deviation-heading": 12.794208647009656, "agent_compute-ego0": 0.012398944111489733, "complete-iteration": 0.18373668600776413, "set_robot_commands": 0.0023092250839855948, "deviation-center-line": 3.802589506143662, "driven_lanedir_consec": 6.900567019774538, "sim_compute_sim_state": 0.012831791155939793, "sim_compute_performance-ego0": 0.0021286657906690308}

LF-norm-small_loop-000-ego0: {"driven_any": 8.33813283590563, "get_ui_image": 0.02565430562561696, "step_physics": 0.0919392744170736, "survival_time": 59.99999999999873, "driven_lanedir": 6.151426621671538, "get_state_dump": 0.004577628182531098, "get_robot_state": 0.003616212309647559, "sim_render-ego0": 0.003709886592194798, "get_duckie_state": 2.1415983608223618e-06, "in-drivable-lane": 13.89999999999963, "deviation-heading": 11.766138953354346, "agent_compute-ego0": 0.012121491189999544, "complete-iteration": 0.1519272668871852, "set_robot_commands": 0.0021936833908119168, "deviation-center-line": 3.9719556029252767, "driven_lanedir_consec": 6.151426621671538, "sim_compute_sim_state": 0.006081931100697641, "sim_compute_performance-ego0": 0.0019475818176650683}
set_robot_commands_max: 0.0023092250839855948
set_robot_commands_mean: 0.002230636994868112
set_robot_commands_median: 0.002218888174601181
set_robot_commands_min: 0.002175546546284489
sim_compute_performance-ego0_max: 0.0021286657906690308
sim_compute_performance-ego0_mean: 0.002002745973862443
sim_compute_performance-ego0_median: 0.0019673681435578368
sim_compute_performance-ego0_min: 0.0019475818176650683
sim_compute_sim_state_max: 0.012831791155939793
sim_compute_sim_state_mean: 0.009673210420957618
sim_compute_sim_state_median: 0.009889559713596514
sim_compute_sim_state_min: 0.006081931100697641
sim_render-ego0_max: 0.003805977021724755
sim_render-ego0_mean: 0.003766902841889571
sim_render-ego0_median: 0.003775873876819366
sim_render-ego0_min: 0.003709886592194798
simulation-passed: 1
step_physics_max: 0.12331353067817176
step_physics_mean: 0.10558935254443168
step_physics_median: 0.10355230254124068
step_physics_min: 0.0919392744170736
survival_time_max: 59.99999999999873
survival_time_mean: 41.83749999999939
survival_time_min: 11.100000000000025
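The min/max/mean/median rows above appear to be plain statistics over the four episode values listed in the per-episodes details. A minimal sketch reproducing the survival_time aggregates (episode values copied from this report; the `aggregate` helper is illustrative, not part of the Duckietown tooling):

```python
from statistics import median

# survival_time per episode, copied from the "per-episodes details" above.
survival_times = {
    "LF-norm-loop-000-ego0": 36.25000000000008,
    "LF-norm-zigzag-000-ego0": 11.100000000000025,
    "LF-norm-techtrack-000-ego0": 59.99999999999873,
    "LF-norm-small_loop-000-ego0": 59.99999999999873,
}

def aggregate(values: dict) -> dict:
    """min/max/mean/median over per-episode values, matching the
    aggregate rows shown in this report."""
    v = list(values.values())
    return {
        "min": min(v),
        "max": max(v),
        "mean": sum(v) / len(v),
        "median": median(v),
    }

agg = aggregate(survival_times)
# agg["median"] matches survival_time_median above (48.1249999999994)
# to within float precision, and agg["mean"] matches survival_time_mean.
```

With an even number of episodes, `statistics.median` averages the two middle values, which is why the median here falls between the loop and techtrack episodes.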
52424 | LFv-sim | error | no | 0:09:04
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140497870064224
- M:video_aido:cmdline(in:/;out:/) 140498595491504
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
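The root failure in this job is PIL's `UnidentifiedImageError`: `banner1.png` exists but its bytes are not a recognizable image, so the video overlay block dies and the error propagates up as `InvalidEvaluator`. A stdlib-only pre-flight sketch that would flag such a file before the video pipeline runs (the magic-byte table and `sniff_image_type` helper are illustrative, not part of the evaluator):

```python
from pathlib import Path
from typing import Optional

# Leading magic bytes for formats a banner asset would plausibly use.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image_type(path: str) -> Optional[str]:
    """Return the detected format, or None when the header matches no
    known format -- the case that makes PIL raise UnidentifiedImageError."""
    head = Path(path).read_bytes()[:8]
    for magic, kind in MAGIC.items():
        if head.startswith(magic):
            return kind
    return None
```

Note the three chained tracebacks above: procgraph's `imread` re-raises the PIL error with `raise ValueError(msg) from e`, and the experiment manager wraps that again as `InvalidEvaluator`, which is why the log shows "The above exception was the direct cause of the following exception" twice.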
52407 | LFv-sim | error | no | 0:08:29
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139825625697056
- M:video_aido:cmdline(in:/;out:/) 139825625335216
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
41775 | LFv-sim | success | no | 0:09:08
41774 | LFv-sim | success | no | 0:09:27
38313 | LFv-sim | success | no | 0:08:51
38311 | LFv-sim | success | no | 0:09:18
36409 | LFv-sim | error | no | 0:00:45
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-Sandy2-sandy-1-job36409-a-wd/challenge-results/challenge_results.yaml' does not exist.
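`NoResultsFound` here simply means the job's working directory never received `challenge-results/challenge_results.yaml`: the solution container exited with code 1 before the evaluator could score anything. A sketch of the existence check the runner performs (path layout taken from the traceback; the `locate_results` name is hypothetical):

```python
from pathlib import Path

def locate_results(wd: str) -> Path:
    """Return the challenge results file for a job working directory,
    or raise if the evaluator never produced one."""
    results = Path(wd) / "challenge-results" / "challenge_results.yaml"
    if not results.is_file():
        raise FileNotFoundError(
            f"File '{results}' does not exist. The evaluator likely did "
            "not finish; check the solution container's logs."
        )
    return results
```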
36407 | LFv-sim | error | no | 0:00:46
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-Sandy1-sandy-1-job36407-a-wd/challenge-results/challenge_results.yaml' does not exist.
35831 | LFv-sim | success | no | 0:01:10
35419 | LFv-sim | error | no | 0:23:10
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg01-94a6fab21ac9-1-job35419:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg01-94a6fab21ac9-1-job35419/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg01-94a6fab21ac9-1-job35419/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg01-94a6fab21ac9-1-job35419/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg01-94a6fab21ac9-1-job35419/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg01-94a6fab21ac9-1-job35419/logs/challenges-runner/stderr.log
35418 | LFv-sim | error | no | 0:22:31
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg03-0c28c9d61367-1-job35418:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg03-0c28c9d61367-1-job35418/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg03-0c28c9d61367-1-job35418/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg03-0c28c9d61367-1-job35418/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg03-0c28c9d61367-1-job35418/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9242/LFv-sim-reg03-0c28c9d61367-1-job35418/logs/challenges-runner/stderr.log
35064 | LFv-sim | success | no | 0:23:56
35063 | LFv-sim | success | no | 0:23:46
34524 | LFv-sim | success | no | 0:25:14
34523 | LFv-sim | success | no | 0:24:06