
Submission 10771

Submission: 10771
Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 57906
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 57906

Episodes:

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
57906 | LFv-sim | success | yes | | | 0:09:44 |
Artefacts hidden.
driven_lanedir_consec_median: 0.621781725975468
survival_time_median: 13.02500000000005
deviation-center-line_median: 0.21445080765247332
in-drivable-lane_median: 6.175000000000047


other stats
agent_compute-ego0_max: 0.043115574663335626
agent_compute-ego0_mean: 0.02113947347914732
agent_compute-ego0_median: 0.01414032928378496
agent_compute-ego0_min: 0.01316166068568374
complete-iteration_max: 0.23641432997298567
complete-iteration_mean: 0.20656787575978372
complete-iteration_median: 0.2063379309894872
complete-iteration_min: 0.17718131108717486
deviation-center-line_max: 0.5913269389633796
deviation-center-line_mean: 0.28403482454058465
deviation-center-line_min: 0.11591074389401222
deviation-heading_max: 2.3498368880068665
deviation-heading_mean: 1.32562069923007
deviation-heading_median: 1.1513232531112747
deviation-heading_min: 0.6499994026908648
driven_any_max: 3.4568853170266074
driven_any_mean: 2.4702679645546066
driven_any_median: 2.487764309194299
driven_any_min: 1.4486579228032206
driven_lanedir_consec_max: 1.9848457236014516
driven_lanedir_consec_mean: 0.9494062299743724
driven_lanedir_consec_min: 0.5692157443451022
driven_lanedir_max: 1.9848457236014516
driven_lanedir_mean: 0.9494062299743724
driven_lanedir_median: 0.621781725975468
driven_lanedir_min: 0.5692157443451022
get_duckie_state_max: 1.7394758250615368e-06
get_duckie_state_mean: 1.4925411278251917e-06
get_duckie_state_median: 1.5192978953657085e-06
get_duckie_state_min: 1.1920928955078125e-06
get_robot_state_max: 0.00416860580444336
get_robot_state_mean: 0.0038813147196496466
get_robot_state_median: 0.003891397709965241
get_robot_state_min: 0.0035738576542247427
get_state_dump_max: 0.005177209148668263
get_state_dump_mean: 0.004801091808833065
get_state_dump_median: 0.004824901363923319
get_state_dump_min: 0.004377355358817361
get_ui_image_max: 0.03690870891917836
get_ui_image_mean: 0.03232901964817443
get_ui_image_median: 0.03376157611892799
get_ui_image_min: 0.0248842174356634
in-drivable-lane_max: 14.700000000000127
in-drivable-lane_mean: 7.912500000000052
in-drivable-lane_min: 4.599999999999986
per-episodes:
details{"LF-norm-loop-000-ego0": {"driven_any": 2.9454487155118763, "get_ui_image": 0.03070379090387829, "step_physics": 0.11889480442890632, "survival_time": 15.10000000000008, "driven_lanedir": 1.9848457236014516, "get_state_dump": 0.004859430168328112, "get_robot_state": 0.003982855541871326, "sim_render-ego0": 0.004087130228678386, "get_duckie_state": 1.6051943939511138e-06, "in-drivable-lane": 4.90000000000007, "deviation-heading": 2.3498368880068665, "agent_compute-ego0": 0.01319489542013741, "complete-iteration": 0.19125463466833137, "set_robot_commands": 0.002357840931454901, "deviation-center-line": 0.5913269389633796, "driven_lanedir_consec": 1.9848457236014516, "sim_compute_sim_state": 0.010895998171060392, "sim_compute_performance-ego0": 0.0021809721150413994}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.4486579228032206, "get_ui_image": 0.03690870891917836, "step_physics": 0.1436965378847989, "survival_time": 8.199999999999982, "driven_lanedir": 0.5692157443451022, "get_state_dump": 0.0047903725595185255, "get_robot_state": 0.003799939878059156, "sim_render-ego0": 0.0040758378577954845, "get_duckie_state": 1.433401396780303e-06, "in-drivable-lane": 4.599999999999986, "deviation-heading": 1.6453546672078063, "agent_compute-ego0": 0.01316166068568374, "complete-iteration": 0.2214212273106431, "set_robot_commands": 0.002254416725852273, "deviation-center-line": 0.27150171713527665, "driven_lanedir_consec": 0.5692157443451022, "sim_compute_sim_state": 0.010528925693396366, "sim_compute_performance-ego0": 0.002111501404733369}, "LF-norm-techtrack-000-ego0": {"driven_any": 3.4568853170266074, "get_ui_image": 0.03681936133397769, "step_physics": 0.15391023452967814, "survival_time": 18.200000000000124, "driven_lanedir": 0.6154076323060313, "get_state_dump": 0.005177209148668263, "get_robot_state": 0.00416860580444336, "sim_render-ego0": 0.004435918102525685, "get_duckie_state": 1.7394758250615368e-06, "in-drivable-lane": 14.700000000000127, "deviation-heading": 0.6572918390147426, "agent_compute-ego0": 0.015085763147432513, "complete-iteration": 0.23641432997298567, "set_robot_commands": 0.0025760787807098807, "deviation-center-line": 0.11591074389401222, "driven_lanedir_consec": 0.6154076323060313, "sim_compute_sim_state": 0.01183118754870271, "sim_compute_performance-ego0": 0.0023031979391019636}, "LF-norm-small_loop-000-ego0": {"driven_any": 2.030079902876722, "get_ui_image": 0.0248842174356634, "step_physics": 0.08869839798320424, "survival_time": 10.95000000000002, "driven_lanedir": 0.6281558196449046, "get_state_dump": 0.004377355358817361, "get_robot_state": 0.0035738576542247427, "sim_render-ego0": 0.0036371469497680662, "get_duckie_state": 1.1920928955078125e-06, "in-drivable-lane": 7.450000000000025, "deviation-heading": 0.6499994026908648, "agent_compute-ego0": 0.043115574663335626, "complete-iteration": 0.17718131108717486, "set_robot_commands": 0.0021626277403397993, "deviation-center-line": 0.15739989816966996, "driven_lanedir_consec": 0.6281558196449046, "sim_compute_sim_state": 0.0048357898538762874, "sim_compute_performance-ego0": 0.0018135081637989392}}
set_robot_commands_max: 0.0025760787807098807
set_robot_commands_mean: 0.0023377410445892133
set_robot_commands_median: 0.002306128828653587
set_robot_commands_min: 0.0021626277403397993
sim_compute_performance-ego0_max: 0.0023031979391019636
sim_compute_performance-ego0_mean: 0.0021022949056689177
sim_compute_performance-ego0_median: 0.002146236759887384
sim_compute_performance-ego0_min: 0.0018135081637989392
sim_compute_sim_state_max: 0.01183118754870271
sim_compute_sim_state_mean: 0.00952297531675894
sim_compute_sim_state_median: 0.01071246193222838
sim_compute_sim_state_min: 0.0048357898538762874
sim_render-ego0_max: 0.004435918102525685
sim_render-ego0_mean: 0.004059008284691905
sim_render-ego0_median: 0.004081484043236935
sim_render-ego0_min: 0.0036371469497680662
simulation-passed: 1
step_physics_max: 0.15391023452967814
step_physics_mean: 0.1262999937066469
step_physics_median: 0.13129567115685262
step_physics_min: 0.08869839798320424
survival_time_max: 18.200000000000124
survival_time_mean: 13.11250000000005
survival_time_min: 8.199999999999982
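
The aggregate rows above (min, mean, median, max) are plain statistics over the four episodes in the per-episodes details. A minimal Python sketch, assuming the JSON blob has been saved locally as per_episodes.json (a hypothetical filename, not something the challenge server provides), that recomputes the driven_lanedir_consec aggregates:

import json
import statistics

# Load a local copy of the per-episodes JSON blob shown above.
with open("per_episodes.json") as f:
    episodes = json.load(f)  # episode name -> dict of per-episode metrics

# One value per episode for the metric of interest.
values = [ep["driven_lanedir_consec"] for ep in episodes.values()]

print("min:   ", min(values))                # should match driven_lanedir_consec_min
print("mean:  ", statistics.mean(values))    # should match driven_lanedir_consec_mean
print("median:", statistics.median(values))  # should match driven_lanedir_consec_median
print("max:   ", max(values))                # should match driven_lanedir_consec_max

With four episodes, the median is the average of the two middle values, which is why driven_lanedir_consec_median (about 0.6218) does not coincide with any single episode's value.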
57905 | LFv-sim | success | yes | | | 0:08:54 |
Artefacts hidden.
51725 | LFv-sim | error | no | | | 0:07:40 | InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139953099919472
- M:video_aido:cmdline(in:/;out:/) 139953098856000
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
Artefacts hidden.
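
The failed job 51725 did not get past video rendering: the root cause in the traceback above is PIL raising UnidentifiedImageError for banner1.png inside make_video2 (procgraph's static_image block), which the experiment manager then surfaces as InvalidEvaluator. Below is a minimal sketch of that bottom-level failure mode only; check_banner is an illustrative helper, not part of the evaluator.

from PIL import Image, UnidentifiedImageError

def check_banner(path: str) -> bool:
    # Returns True if PIL can identify the file as an image; this mirrors the
    # Image.open call at the bottom of the traceback above.
    try:
        with Image.open(path) as im:
            im.verify()  # header-only check, no full decode
        return True
    except (UnidentifiedImageError, OSError) as exc:
        print(f"cannot identify image file {path!r}: {exc}")
        return False

check_banner("banner1.png")  # in job 51725, PIL could not decode this file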
40971 | LFv-sim | success | no | | | 0:08:20 |
Artefacts hidden.
37156 | LFv-sim | success | no | | | 0:09:55 |
Artefacts hidden.
37155 | LFv-sim | success | no | | | 0:10:14 |
Artefacts hidden.