
Submission 11712

Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 53931
Next:
User label: real-exercise-2
Admin priority: 50
Blessing: n/a
User priority: 50

Job 53931

Episodes evaluated (per-episode statistics pages not included in this export):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job 53931 | step: LFv-sim | status: success | up to date: yes | duration: 0:31:39
Artefacts hidden; author login is required to view them.
driven_lanedir_consec_median: 3.98402162645874
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.123693617530104
in-drivable-lane_median: 9.124999999999812


Other stats

agent_compute-ego0_max: 0.01321668628848264
agent_compute-ego0_mean: 0.012667262909293053
agent_compute-ego0_median: 0.012613601485612445
agent_compute-ego0_min: 0.012225162377464682
complete-iteration_max: 0.27790141573139265
complete-iteration_mean: 0.249316398376282
complete-iteration_median: 0.2487169974650273
complete-iteration_min: 0.2219301828436808
deviation-center-line_max: 4.481717528869844
deviation-center-line_mean: 2.30426299094045
deviation-center-line_min: 0.48794719983174734
deviation-heading_max: 17.220459251091828
deviation-heading_mean: 9.414820913101218
deviation-heading_median: 9.318876245147182
deviation-heading_min: 1.8010719110186784
driven_any_max: 11.525145665271811
driven_any_mean: 8.317302662179618
driven_any_median: 10.292047505548474
driven_any_min: 1.159969972349714
driven_lanedir_consec_max: 9.0645802523921
driven_lanedir_consec_mean: 4.488908818657182
driven_lanedir_consec_min: 0.9230117693191476
driven_lanedir_max: 9.0645802523921
driven_lanedir_mean: 4.948884040429746
driven_lanedir_median: 4.903972070003866
driven_lanedir_min: 0.9230117693191476
get_duckie_state_max: 2.4240876514647623e-06
get_duckie_state_mean: 2.338407251821674e-06
get_duckie_state_median: 2.3638378273537517e-06
get_duckie_state_min: 2.20186570111443e-06
get_robot_state_max: 0.003928127932012528
get_robot_state_mean: 0.003839639114728795
get_robot_state_median: 0.0038636618113140576
get_robot_state_min: 0.003703104904274535
get_state_dump_max: 0.004984956101315107
get_state_dump_mean: 0.00485388747002622
get_state_dump_median: 0.004877358848705181
get_state_dump_min: 0.004675876081379411
get_ui_image_max: 0.03548653576395891
get_ui_image_mean: 0.03214197233957893
get_ui_image_median: 0.0328172088901344
get_ui_image_min: 0.027446935814088032
in-drivable-lane_max: 52.84999999999873
in-drivable-lane_mean: 17.999999999999588
in-drivable-lane_min: 0.8999999999999968
per-episodes details:

LF-norm-loop-000-ego0:
  driven_any: 11.525145665271811
  get_ui_image: 0.03069198717185599
  step_physics: 0.1514530239454614
  survival_time: 59.99999999999873
  driven_lanedir: 8.58767173612214
  get_state_dump: 0.004932593743469594
  get_robot_state: 0.003928127932012528
  sim_render-ego0: 0.00392330754905021
  get_duckie_state: 2.3563934504042856e-06
  in-drivable-lane: 10.54999999999972
  deviation-heading: 16.400877107273256
  agent_compute-ego0: 0.012783448563130273
  complete-iteration: 0.2219301828436808
  set_robot_commands: 0.002366706989488435
  deviation-center-line: 3.62061577373684
  driven_lanedir_consec: 6.747770849031886
  sim_compute_sim_state: 0.009634338151803125
  sim_compute_performance-ego0: 0.0021241807024445164

LF-norm-zigzag-000-ego0:
  driven_any: 1.159969972349714
  get_ui_image: 0.034942430608412796
  step_physics: 0.2037080752304177
  survival_time: 7.599999999999981
  driven_lanedir: 0.9230117693191476
  get_state_dump: 0.004675876081379411
  get_robot_state: 0.003703104904274535
  sim_render-ego0: 0.0037235886442895023
  get_duckie_state: 2.20186570111443e-06
  in-drivable-lane: 0.8999999999999968
  deviation-heading: 2.236875383021107
  agent_compute-ego0: 0.012443754408094618
  complete-iteration: 0.27790141573139265
  set_robot_commands: 0.0021548629586213555
  deviation-center-line: 0.48794719983174734
  driven_lanedir_consec: 0.9230117693191476
  sim_compute_sim_state: 0.010407882578232708
  sim_compute_performance-ego0: 0.0020539963167477276

LF-norm-techtrack-000-ego0:
  driven_any: 11.224154473815425
  get_ui_image: 0.03548653576395891
  step_physics: 0.1834162756564913
  survival_time: 59.99999999999873
  driven_lanedir: 9.0645802523921
  get_state_dump: 0.004984956101315107
  get_robot_state: 0.003886510688597514
  sim_render-ego0: 0.003977556609789795
  get_duckie_state: 2.4240876514647623e-06
  in-drivable-lane: 7.699999999999908
  deviation-heading: 17.220459251091828
  agent_compute-ego0: 0.01321668628848264
  complete-iteration: 0.26326220736316996
  set_robot_commands: 0.002383537038379069
  deviation-center-line: 4.481717528869844
  driven_lanedir_consec: 9.0645802523921
  sim_compute_sim_state: 0.013651268170537004
  sim_compute_performance-ego0: 0.0021616292932845472

LF-norm-small_loop-000-ego0:
  driven_any: 9.359940537281524
  get_ui_image: 0.027446935814088032
  step_physics: 0.1709438031360966
  survival_time: 59.99999999999873
  driven_lanedir: 1.2202724038855937
  get_state_dump: 0.004822123953940767
  get_robot_state: 0.003840812934030601
  sim_render-ego0: 0.003911929563320646
  get_duckie_state: 2.3712822043032178e-06
  in-drivable-lane: 52.84999999999873
  deviation-heading: 1.8010719110186784
  agent_compute-ego0: 0.012225162377464682
  complete-iteration: 0.23417178756688456
  set_robot_commands: 0.002330248203007605
  deviation-center-line: 0.626771461323368
  driven_lanedir_consec: 1.2202724038855937
  sim_compute_sim_state: 0.0064673630224477245
  sim_compute_performance-ego0: 0.00209418998769082
set_robot_commands_max: 0.002383537038379069
set_robot_commands_mean: 0.002308838797374116
set_robot_commands_median: 0.00234847759624802
set_robot_commands_min: 0.0021548629586213555
sim_compute_performance-ego0_max: 0.0021616292932845472
sim_compute_performance-ego0_mean: 0.0021084990750419026
sim_compute_performance-ego0_median: 0.0021091853450676684
sim_compute_performance-ego0_min: 0.0020539963167477276
sim_compute_sim_state_max: 0.013651268170537004
sim_compute_sim_state_mean: 0.01004021298075514
sim_compute_sim_state_median: 0.010021110365017917
sim_compute_sim_state_min: 0.0064673630224477245
sim_render-ego0_max: 0.003977556609789795
sim_render-ego0_mean: 0.003884095591612538
sim_render-ego0_median: 0.003917618556185428
sim_render-ego0_min: 0.0037235886442895023
simulation-passed: 1
step_physics_max: 0.2037080752304177
step_physics_mean: 0.17738029449211673
step_physics_median: 0.17718003939629395
step_physics_min: 0.1514530239454614
survival_time_max: 59.99999999999873
survival_time_mean: 46.89999999999904
survival_time_min: 7.599999999999981
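The aggregate rows appear to be plain min/mean/median/max taken over the four episodes. A minimal sketch (not the official duckietown-challenges scoring code) reproducing the survival_time aggregates from the per-episode values listed in the per-episodes details:

```python
# Sketch: recompute the survival_time aggregate rows from the
# per-episode values. Episode names and numbers are copied verbatim
# from the per-episodes details above.
from statistics import mean, median

survival_time = {
    "LF-norm-loop-000-ego0": 59.99999999999873,
    "LF-norm-zigzag-000-ego0": 7.599999999999981,
    "LF-norm-techtrack-000-ego0": 59.99999999999873,
    "LF-norm-small_loop-000-ego0": 59.99999999999873,
}

values = list(survival_time.values())
print("survival_time_max   ", max(values))
print("survival_time_mean  ", mean(values))   # matches 46.89999999999904 above
print("survival_time_median", median(values))
print("survival_time_min   ", min(values))
```

The same recipe reproduces the other `_max`/`_mean`/`_median`/`_min` rows; the headline scores at the top of the job are the `_median` entries of this family.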
Job 48848 | step: LFv-sim | status: error | up to date: no | duration: 0:09:56
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140538853659840
- M:video_aido:cmdline(in:/;out:/) 140538853519472
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
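The root cause in the traceback is PIL failing to identify 'banner1.png' as an image; an empty, truncated, or mislabeled file produces exactly this UnidentifiedImageError. A hypothetical pre-flight check (not part of the evaluator), using only the standard library, that catches the common case before the file reaches PIL:

```python
# Hypothetical helper, not part of duckietown-challenges: verify that a
# file starts with the 8-byte PNG signature before handing it to PIL.
# Empty, truncated, or non-PNG files (e.g. an HTML error page saved with
# a .png name) fail this check, which is the same class of file that
# makes PIL.Image.open raise UnidentifiedImageError.
import pathlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path: str) -> bool:
    """Return True iff the file exists and starts with the PNG magic bytes."""
    try:
        header = pathlib.Path(path).read_bytes()[:8]
    except OSError:
        return False
    return header == PNG_SIGNATURE
```

A guard like this around the static banner input would let the video step substitute a blank frame instead of aborting the whole evaluation with InvalidEvaluator.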
Job 43674 | step: LFv-sim | status: success | up to date: no | duration: 0:06:45
Job 43673 | step: LFv-sim | status: success | up to date: no | duration: 0:08:04
Job 43672 | step: LFv-sim | status: success | up to date: no | duration: 0:08:03