
Submission 11654

Submission: 11654
Competing: yes
Challenge: aido5-LF-sim-validation
User: Dishank Bansal 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 54130
Next:
User label: exercise_state_estimation
Admin priority: 50
Blessing: n/a
User priority: 50

Job 54130

Episodes:
LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID: 54130 | step: LFv-sim | status: success | up to date: yes | duration: 0:35:30
driven_lanedir_consec_median: 8.51394623929594
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.556001555972421
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.012264150465457865
agent_compute-ego0_mean: 0.01192205504116468
agent_compute-ego0_median: 0.011979188450567928
agent_compute-ego0_min: 0.011465692798065008
complete-iteration_max: 0.2122838334378156
complete-iteration_mean: 0.1848034646290724
complete-iteration_median: 0.1806747065694207
complete-iteration_min: 0.16558061193963272
deviation-center-line_max: 2.9686298912733045
deviation-center-line_mean: 2.5548518266208737
deviation-center-line_min: 2.138774303265349
deviation-heading_max: 12.628040025147811
deviation-heading_mean: 10.311033159246442
deviation-heading_median: 9.790259350370537
deviation-heading_min: 9.035573911096886
driven_any_max: 9.52119092834424
driven_any_mean: 8.83058823548156
driven_any_median: 8.733811388596308
driven_any_min: 8.333539236389383
driven_lanedir_consec_max: 9.34319919907086
driven_lanedir_consec_mean: 8.634343374383684
driven_lanedir_consec_min: 8.166281819872
driven_lanedir_max: 9.34319919907086
driven_lanedir_mean: 8.634343374383684
driven_lanedir_median: 8.51394623929594
driven_lanedir_min: 8.166281819872
get_duckie_state_max: 1.8309196961313164e-06
get_duckie_state_mean: 1.7538455801145123e-06
get_duckie_state_median: 1.7332494705543232e-06
get_duckie_state_min: 1.7179636832180864e-06
get_robot_state_max: 0.003815753374568231
get_robot_state_mean: 0.003793273539864749
get_robot_state_median: 0.0037928732309015865
get_robot_state_min: 0.003771594323087592
get_state_dump_max: 0.0049148058514114626
get_state_dump_mean: 0.004835841608087189
get_state_dump_median: 0.0048380565087463735
get_state_dump_min: 0.004752447563444546
get_ui_image_max: 0.03669235271577732
get_ui_image_mean: 0.031148330307721496
get_ui_image_median: 0.03043069073203005
get_ui_image_min: 0.027039587051048565
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes details:
{
  "LF-norm-loop-000-ego0": {"driven_any": 9.52119092834424, "get_ui_image": 0.0280638771390637, "step_physics": 0.10155941663832588, "survival_time": 59.99999999999873, "driven_lanedir": 9.34319919907086, "get_state_dump": 0.004752447563444546, "get_robot_state": 0.003771594323087592, "sim_render-ego0": 0.003795196968351772, "get_duckie_state": 1.7306687532118415e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.326933394569815, "agent_compute-ego0": 0.011465692798065008, "complete-iteration": 0.16698667211000567, "set_robot_commands": 0.002279838654123476, "deviation-center-line": 2.9686298912733045, "driven_lanedir_consec": 9.34319919907086, "sim_compute_sim_state": 0.009171748339980964, "sim_compute_performance-ego0": 0.002035434398921106},
  "LF-norm-zigzag-000-ego0": {"driven_any": 8.474078464772795, "get_ui_image": 0.03669235271577732, "step_physics": 0.13332613322458894, "survival_time": 59.99999999999873, "driven_lanedir": 8.208271599739602, "get_state_dump": 0.004824254435365345, "get_robot_state": 0.0037961018075553903, "sim_render-ego0": 0.003872675065891034, "get_duckie_state": 1.8309196961313164e-06, "in-drivable-lane": 0.0, "deviation-heading": 12.628040025147811, "agent_compute-ego0": 0.012164114516144688, "complete-iteration": 0.2122838334378156, "set_robot_commands": 0.002258999957133094, "deviation-center-line": 2.7896417714688484, "driven_lanedir_consec": 8.208271599739602, "sim_compute_sim_state": 0.013151430071243937, "sim_compute_performance-ego0": 0.0020973378672984916},
  "LF-norm-techtrack-000-ego0": {"driven_any": 8.993544312419823, "get_ui_image": 0.0327975043249964, "step_physics": 0.11873331236700332, "survival_time": 59.99999999999873, "driven_lanedir": 8.81962087885228, "get_state_dump": 0.0049148058514114626, "get_robot_state": 0.0037896446542477823, "sim_render-ego0": 0.003886963703749479, "get_duckie_state": 1.7179636832180864e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.035573911096886, "agent_compute-ego0": 0.012264150465457865, "complete-iteration": 0.19436274102883577, "set_robot_commands": 0.0022650638488210507, "deviation-center-line": 2.3223613404759944, "driven_lanedir_consec": 8.81962087885228, "sim_compute_sim_state": 0.013511308920969078, "sim_compute_performance-ego0": 0.002100716820366674},
  "LF-norm-small_loop-000-ego0": {"driven_any": 8.333539236389383, "get_ui_image": 0.027039587051048565, "step_physics": 0.10350456027365249, "survival_time": 59.99999999999873, "driven_lanedir": 8.166281819872, "get_state_dump": 0.004851858582127402, "get_robot_state": 0.003815753374568231, "sim_render-ego0": 0.003844804112659108, "get_duckie_state": 1.7358301878968047e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.253585306171257, "agent_compute-ego0": 0.011794262384991164, "complete-iteration": 0.16558061193963272, "set_robot_commands": 0.002309819046007803, "deviation-center-line": 2.138774303265349, "driven_lanedir_consec": 8.166281819872, "sim_compute_sim_state": 0.006287022891588553, "sim_compute_performance-ego0": 0.002041245181792781}
}
set_robot_commands_max: 0.002309819046007803
set_robot_commands_mean: 0.002278430376521356
set_robot_commands_median: 0.002272451251472263
set_robot_commands_min: 0.002258999957133094
sim_compute_performance-ego0_max: 0.002100716820366674
sim_compute_performance-ego0_mean: 0.002068683567094763
sim_compute_performance-ego0_median: 0.002069291524545636
sim_compute_performance-ego0_min: 0.002035434398921106
sim_compute_sim_state_max: 0.013511308920969078
sim_compute_sim_state_mean: 0.010530377555945634
sim_compute_sim_state_median: 0.01116158920561245
sim_compute_sim_state_min: 0.006287022891588553
sim_render-ego0_max: 0.003886963703749479
sim_render-ego0_mean: 0.003849909962662848
sim_render-ego0_median: 0.003858739589275071
sim_render-ego0_min: 0.003795196968351772
simulation-passed: 1
step_physics_max: 0.13332613322458894
step_physics_mean: 0.11428085562589264
step_physics_median: 0.1111189363203279
step_physics_min: 0.10155941663832588
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
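
The aggregate rows above (min, mean, median, max) line up with the four per-episode values reported in the per-episodes details. A minimal sketch of that aggregation in Python, assuming the per-episodes dict has been saved to per_episodes.json (a hypothetical file name; only standard-library functions are used):

# Sketch: recompute an aggregate row from the per-episodes details.
import json
from statistics import mean, median

with open("per_episodes.json") as f:   # hypothetical dump of the dict above
    episodes = json.load(f)

metric = "deviation-center-line"
values = [ep[metric] for ep in episodes.values()]   # one value per episode

print(f"{metric}_min:    {min(values)}")
print(f"{metric}_max:    {max(values)}")
print(f"{metric}_mean:   {mean(values)}")
print(f"{metric}_median: {median(values)}")  # matches the reported 2.556001555972421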
Job ID: 49193 | step: LFv-sim | status: error | up to date: no | duration: 0:11:11 | message: InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140038052388928
- M:video_aido:cmdline(in:/;out:/) 140038052276784
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
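
This failed job did not crash in the agent itself: the video-rendering step (make_video2, driven by procgraph's static_image block) aborted because PIL could not decode the overlay file 'banner1.png', and the experiment manager wrapped that into an InvalidEvaluator error. A minimal pre-flight check one could run on such assets before rendering, as a sketch (the file list is an assumption; only Pillow's standard open/verify API is used):

# Sketch: confirm banner assets are decodable before the procgraph video step.
from pathlib import Path
from PIL import Image, UnidentifiedImageError

def is_valid_image(path: str) -> bool:
    """Return True if Pillow can identify and verify the file."""
    p = Path(path)
    if not p.is_file() or p.stat().st_size == 0:
        return False
    try:
        with Image.open(p) as im:
            im.verify()  # integrity check without decoding all pixel data
        return True
    except (UnidentifiedImageError, OSError):
        return False

for banner in ["banner1.png"]:  # the asset that failed in job 49193
    if not is_valid_image(banner):
        print(f"warning: {banner!r} is missing or not a valid image; video rendering will fail")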
Job ID: 43289 | step: LFv-sim | status: success | up to date: no | duration: 0:09:18
Job ID: 43288 | step: LFv-sim | status: success | up to date: no | duration: 0:09:24
Job ID: 43287 | step: LFv-sim | status: success | up to date: no | duration: 0:09:33
Job ID: 43286 | step: LFv-sim | status: success | up to date: no | duration: 0:11:03
Job ID: 43285 | step: LFv-sim | status: success | up to date: no | duration: 0:08:47