
Submission 11068

Submission: 11068
Competing: yes
Challenge: aido5-LF-sim-validation
User: Étienne Boucher 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 56880
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

56880

Episodes:

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
56880 | LFv-sim | success | yes | | | 0:06:19 |
driven_lanedir_consec_median: 0.6686410004768379
survival_time_median: 5.774999999999988
deviation-center-line_median: 0.13170054806002218
in-drivable-lane_median: 2.7749999999999915


other stats
agent_compute-ego0_max: 0.013889285496303014
agent_compute-ego0_mean: 0.012690456899388242
agent_compute-ego0_median: 0.012442727857880722
agent_compute-ego0_min: 0.01198708638548851
complete-iteration_max: 0.20154266130356563
complete-iteration_mean: 0.17898527319754637
complete-iteration_median: 0.18287329752332632
complete-iteration_min: 0.14865183643996716
deviation-center-line_max: 0.5208601028005717
deviation-center-line_mean: 0.21931563450431088
deviation-center-line_min: 0.0930013390966274
deviation-heading_max: 1.3772308129971225
deviation-heading_mean: 0.7875702992885585
deviation-heading_median: 0.6973321406794115
deviation-heading_min: 0.37838610279828855
driven_any_max: 2.4842546828207253
driven_any_mean: 1.7592469706031826
driven_any_median: 1.6456924558182862
driven_any_min: 1.2613482879554334
driven_lanedir_consec_max: 2.029530283703435
driven_lanedir_consec_mean: 0.9337523468445322
driven_lanedir_consec_min: 0.368197102721018
driven_lanedir_max: 2.029530283703435
driven_lanedir_mean: 0.9337523468445322
driven_lanedir_median: 0.6686410004768379
driven_lanedir_min: 0.368197102721018
get_duckie_state_max: 1.784733363560268e-06
get_duckie_state_mean: 1.6996480742163844e-06
get_duckie_state_median: 1.7067054059447311e-06
get_duckie_state_min: 1.6004481214158078e-06
get_robot_state_max: 0.004316604705083938
get_robot_state_mean: 0.003846268744391136
get_robot_state_median: 0.0037174064010675545
get_robot_state_min: 0.003633657470345497
get_state_dump_max: 0.005403109959193638
get_state_dump_mean: 0.004988440970736741
get_state_dump_median: 0.004910215588511738
get_state_dump_min: 0.004730222746729851
get_ui_image_max: 0.03342911284020607
get_ui_image_mean: 0.02985358367581466
get_ui_image_median: 0.030332145728300675
get_ui_image_min: 0.025320930406451225
in-drivable-lane_max: 3.999999999999986
in-drivable-lane_mean: 2.862499999999991
in-drivable-lane_min: 1.899999999999995
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 2.4842546828207253, "get_ui_image": 0.027458864014323164, "step_physics": 0.10595222071903508, "survival_time": 8.14999999999998, "driven_lanedir": 2.029530283703435, "get_state_dump": 0.005042068841980725, "get_robot_state": 0.0037951731100315, "sim_render-ego0": 0.003929535063301645, "get_duckie_state": 1.731442242133908e-06, "in-drivable-lane": 1.899999999999995, "deviation-heading": 0.9941553388382084, "agent_compute-ego0": 0.012432876156597602, "complete-iteration": 0.17252010543171953, "set_robot_commands": 0.002244378008493563, "deviation-center-line": 0.5208601028005717, "driven_lanedir_consec": 2.029530283703435, "sim_compute_sim_state": 0.009430049396142727, "sim_compute_performance-ego0": 0.002140324290205793}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.2613482879554334, "get_ui_image": 0.03342911284020607, "step_physics": 0.12141540202688664, "survival_time": 4.6499999999999915, "driven_lanedir": 0.368197102721018, "get_state_dump": 0.004778362335042751, "get_robot_state": 0.0036396396921036097, "sim_render-ego0": 0.003809878166685713, "get_duckie_state": 1.6004481214158078e-06, "in-drivable-lane": 2.699999999999993, "deviation-heading": 1.3772308129971225, "agent_compute-ego0": 0.012452579559163844, "complete-iteration": 0.1932264896149331, "set_robot_commands": 0.002211892858464667, "deviation-center-line": 0.15461501015929013, "driven_lanedir_consec": 0.368197102721018, "sim_compute_sim_state": 0.009341298265660065, "sim_compute_performance-ego0": 0.002052096610373639}, "LF-norm-techtrack-000-ego0": {"driven_any": 1.4360525949292078, "get_ui_image": 0.03320542744227818, "step_physics": 0.1265028749193464, "survival_time": 5.1999999999999895, "driven_lanedir": 0.659635841451105, "get_state_dump": 0.005403109959193638, "get_robot_state": 0.004316604705083938, "sim_render-ego0": 0.004331270853678386, "get_duckie_state": 1.784733363560268e-06, "in-drivable-lane": 2.84999999999999, "deviation-heading": 0.4005089425206144, "agent_compute-ego0": 0.013889285496303014, "complete-iteration": 0.20154266130356563, "set_robot_commands": 0.0023638543628510976, "deviation-center-line": 0.0930013390966274, "driven_lanedir_consec": 0.659635841451105, "sim_compute_sim_state": 0.009121792657034736, "sim_compute_performance-ego0": 0.002299983160836356}, "LF-norm-small_loop-000-ego0": {"driven_any": 1.8553323167073643, "get_ui_image": 0.025320930406451225, "step_physics": 0.0899541899561882, "survival_time": 6.349999999999985, "driven_lanedir": 0.6776461595025709, "get_state_dump": 0.004730222746729851, "get_robot_state": 0.003633657470345497, "sim_render-ego0": 0.003708623349666595, "get_duckie_state": 1.6819685697555542e-06, "in-drivable-lane": 3.999999999999986, "deviation-heading": 0.37838610279828855, "agent_compute-ego0": 0.01198708638548851, "complete-iteration": 0.14865183643996716, "set_robot_commands": 0.0022688787430524826, "deviation-center-line": 0.10878608596075424, "driven_lanedir_consec": 0.6776461595025709, "sim_compute_sim_state": 0.004979047924280167, "sim_compute_performance-ego0": 0.001965725794434547}}
set_robot_commands_max: 0.0023638543628510976
set_robot_commands_mean: 0.002272250993215453
set_robot_commands_median: 0.002256628375773023
set_robot_commands_min: 0.002211892858464667
sim_compute_performance-ego0_max: 0.002299983160836356
sim_compute_performance-ego0_mean: 0.002114532463962584
sim_compute_performance-ego0_median: 0.002096210450289716
sim_compute_performance-ego0_min: 0.001965725794434547
sim_compute_sim_state_max: 0.009430049396142727
sim_compute_sim_state_mean: 0.008218047060779424
sim_compute_sim_state_median: 0.0092315454613474
sim_compute_sim_state_min: 0.004979047924280167
sim_render-ego0_max: 0.004331270853678386
sim_render-ego0_mean: 0.0039448268583330845
sim_render-ego0_median: 0.003869706614993679
sim_render-ego0_min: 0.003708623349666595
simulation-passed: 1
step_physics_max: 0.1265028749193464
step_physics_mean: 0.11095617190536408
step_physics_median: 0.11368381137296084
step_physics_min: 0.0899541899561882
survival_time_max: 8.14999999999998
survival_time_mean: 6.087499999999986
survival_time_min: 4.6499999999999915
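
The aggregate values above appear to be the per-metric minimum, maximum, mean, and median taken over the four episodes in the per-episodes details. A minimal sketch of how they could be reproduced from that JSON, assuming it has been saved locally as per_episodes.json (a hypothetical filename, not an evaluator output):

import json
import statistics

# Load the per-episodes details shown above; the filename is hypothetical.
with open("per_episodes.json") as f:
    episodes = json.load(f)

# Gather each metric's value across the four episodes.
per_metric = {}
for stats in episodes.values():
    for name, value in stats.items():
        per_metric.setdefault(name, []).append(value)

# Reproduce the min / max / mean / median aggregates listed under "other stats".
for name, values in sorted(per_metric.items()):
    print(name + "_min", min(values))
    print(name + "_max", max(values))
    print(name + "_mean", statistics.mean(values))
    print(name + "_median", statistics.median(values))

For example, the four survival_time values (about 8.15, 4.65, 5.2, 6.35) give the reported median of about 5.775 and mean of about 6.0875.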
56872 | LFv-sim | success | yes | | | 0:04:38 |
50753 | LFv-sim | error | no | | | 0:01:45 | InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140291646767600
- M:video_aido:cmdline(in:/;out:/) 140296661408352
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
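
The root cause here is that PIL could not decode the 'banner1.png' asset used by the video-rendering step (make_video2), which the experiment manager then surfaces as InvalidEvaluator. A minimal local sanity check for such an asset, assuming only that Pillow is installed; the helper below is illustrative and not part of the evaluator:

from PIL import Image, UnidentifiedImageError

def is_readable_image(path: str) -> bool:
    # Return True only if Pillow can identify and verify the file as an image.
    try:
        with Image.open(path) as im:
            im.verify()  # lightweight integrity check, no full decode
        return True
    except (FileNotFoundError, UnidentifiedImageError, OSError):
        return False

# 'banner1.png' is the file the evaluator failed to open in job 50753.
print(is_readable_image("banner1.png"))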
40323 | LFv-sim | success | no | | | 0:06:21 |
40322 | LFv-sim | success | no | | | 0:08:21 |
40319 | LFv-sim | success | no | | | 0:08:37 |
39684 | LFv-sim | success | no | | | 0:06:31 |
38846 | LFv-sim | success | no | | | 0:07:58 |
38845 | LFv-sim | success | no | | | 0:06:45 |