
Submission 10738

Submission: 10738
Competing: yes
Challenge: aido5-LF-sim-validation
User: Dishank Bansal 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 57941
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 57941

Episodes:

LF-norm-loop-000

LF-norm-small_loop-000

LF-norm-techtrack-000

LF-norm-zigzag-000

Evaluation jobs for this submission

Job 57941 | step: LFv-sim | status: success | up to date: yes | duration: 0:23:37
driven_lanedir_consec_median: 5.449102107204245
survival_time_median: 53.699999999999086
deviation-center-line_median: 2.2240170009268914
in-drivable-lane_median: 1.450000000000002


other stats
agent_compute-ego0_max: 0.012923770480685765
agent_compute-ego0_mean: 0.012267156304185949
agent_compute-ego0_median: 0.01221893342581909
agent_compute-ego0_min: 0.011706987884419843
complete-iteration_max: 0.22426645717923605
complete-iteration_mean: 0.17046263399297795
complete-iteration_median: 0.15972547389702024
complete-iteration_min: 0.13813313099863528
deviation-center-line_max: 4.507933751695632
deviation-center-line_mean: 2.26455496268841
deviation-center-line_min: 0.10225209720422272
deviation-heading_max: 10.660689272527492
deviation-heading_mean: 6.097199364848016
deviation-heading_median: 6.297626984030909
deviation-heading_min: 1.1328542188027533
driven_any_max: 8.338069146308921
driven_any_mean: 5.859109853657045
driven_any_median: 7.4052318412203535
driven_any_min: 0.2879065858785484
driven_lanedir_consec_max: 8.106879722772375
driven_lanedir_consec_mean: 4.782415596214744
driven_lanedir_consec_min: 0.12457844767811156
driven_lanedir_max: 8.106879722772375
driven_lanedir_mean: 4.782415596214744
driven_lanedir_median: 5.449102107204245
driven_lanedir_min: 0.12457844767811156
get_duckie_state_max: 1.1126200358072915e-06
get_duckie_state_mean: 1.0799353249254958e-06
get_duckie_state_median: 1.0981370662582713e-06
get_duckie_state_min: 1.0108471313781484e-06
get_robot_state_max: 0.0034460574760722877
get_robot_state_mean: 0.003426293085466552
get_robot_state_median: 0.0034227661822239007
get_robot_state_min: 0.003413582501346118
get_state_dump_max: 0.004238025433415676
get_state_dump_mean: 0.004191341337769063
get_state_dump_median: 0.004194849415199655
get_state_dump_min: 0.004137641087261267
get_ui_image_max: 0.03556014242626372
get_ui_image_mean: 0.03006109278918039
get_ui_image_median: 0.029685548259117137
get_ui_image_min: 0.025313132212223557
in-drivable-lane_max: 24.799999999999795
in-drivable-lane_mean: 7.149999999999937
in-drivable-lane_min: 0.8999999999999488
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 8.331165551043027, "get_ui_image": 0.027480889121062748, "step_physics": 0.08526836604897327, "survival_time": 59.99999999999873, "driven_lanedir": 8.106879722772375, "get_state_dump": 0.004137641087261267, "get_robot_state": 0.0034460574760722877, "sim_render-ego0": 0.0035368826466734265, "get_duckie_state": 1.0108471313781484e-06, "in-drivable-lane": 0.8999999999999488, "deviation-heading": 6.663275948505493, "agent_compute-ego0": 0.011816479185042432, "complete-iteration": 0.14895332584174645, "set_robot_commands": 0.002040101526976624, "deviation-center-line": 2.837169670047046, "driven_lanedir_consec": 8.106879722772375, "sim_compute_sim_state": 0.009320252146153127, "sim_compute_performance-ego0": 0.001838061930634993}, "LF-norm-zigzag-000-ego0": {"driven_any": 0.2879065858785484, "get_ui_image": 0.03556014242626372, "step_physics": 0.15140326439388216, "survival_time": 3.099999999999997, "driven_lanedir": 0.12457844767811156, "get_state_dump": 0.00420636222476051, "get_robot_state": 0.003417121039496528, "sim_render-ego0": 0.003566673823765346, "get_duckie_state": 1.1126200358072915e-06, "in-drivable-lane": 1.4999999999999962, "deviation-heading": 1.1328542188027533, "agent_compute-ego0": 0.012923770480685765, "complete-iteration": 0.22426645717923605, "set_robot_commands": 0.0020596981048583984, "deviation-center-line": 0.10225209720422272, "driven_lanedir_consec": 0.12457844767811156, "sim_compute_sim_state": 0.009257218194386315, "sim_compute_performance-ego0": 0.0018022779434446305}, "LF-norm-techtrack-000-ego0": {"driven_any": 6.47929813139768, "get_ui_image": 0.03189020739717152, "step_physics": 0.0998480450617124, "survival_time": 47.39999999999944, "driven_lanedir": 2.9499961335764344, "get_state_dump": 0.0041833366056388, "get_robot_state": 0.003413582501346118, "sim_render-ego0": 0.0035766070459363835, "get_duckie_state": 1.0978811282126746e-06, "in-drivable-lane": 24.799999999999795, "deviation-heading": 5.9319780195563245, "agent_compute-ego0": 0.012621387666595749, "complete-iteration": 0.17049762195229404, "set_robot_commands": 0.0020048947429757475, "deviation-center-line": 1.610864331806737, "driven_lanedir_consec": 2.9499961335764344, "sim_compute_sim_state": 0.011076566668029078, "sim_compute_performance-ego0": 0.0018135616977800184}, "LF-norm-small_loop-000-ego0": {"driven_any": 8.338069146308921, "get_ui_image": 0.025313132212223557, "step_physics": 0.08007445899175665, "survival_time": 59.99999999999873, "driven_lanedir": 7.948208080832056, "get_state_dump": 0.004238025433415676, "get_robot_state": 0.0034284113249512735, "sim_render-ego0": 0.0035245388771076185, "get_duckie_state": 1.098393004303868e-06, "in-drivable-lane": 1.4000000000000077, "deviation-heading": 10.660689272527492, "agent_compute-ego0": 0.011706987884419843, "complete-iteration": 0.13813313099863528, "set_robot_commands": 0.002026413203675384, "deviation-center-line": 4.507933751695632, "driven_lanedir_consec": 7.948208080832056, "sim_compute_sim_state": 0.005940446647180308, "sim_compute_performance-ego0": 0.0018121459700483568}}
set_robot_commands_max: 0.0020596981048583984
set_robot_commands_mean: 0.0020327768946215383
set_robot_commands_median: 0.002033257365326004
set_robot_commands_min: 0.0020048947429757475
sim_compute_performance-ego0_max: 0.001838061930634993
sim_compute_performance-ego0_mean: 0.0018165118854769997
sim_compute_performance-ego0_median: 0.0018128538339141872
sim_compute_performance-ego0_min: 0.0018022779434446305
sim_compute_sim_state_max: 0.011076566668029078
sim_compute_sim_state_mean: 0.008898620913937207
sim_compute_sim_state_median: 0.00928873517026972
sim_compute_sim_state_min: 0.005940446647180308
sim_render-ego0_max: 0.0035766070459363835
sim_render-ego0_mean: 0.0035511755983706934
sim_render-ego0_median: 0.0035517782352193863
sim_render-ego0_min: 0.0035245388771076185
simulation-passed: 1
step_physics_max: 0.15140326439388216
step_physics_mean: 0.10414853362408112
step_physics_median: 0.09255820555534285
step_physics_min: 0.08007445899175665
survival_time_max: 59.99999999999873
survival_time_mean: 42.624999999999226
survival_time_min: 3.099999999999997
No reset possible
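
The aggregate rows above (the _min, _mean, _median and _max values) are consistent with simple per-metric statistics over the four episodes in the per-episodes details JSON; for example, the median of the four driven_lanedir_consec values is 5.449102107204245, which matches the headline metric. A minimal sketch of recomputing them, assuming the details JSON has been saved locally as details.json (a hypothetical filename, not an artefact of this page):

# Sketch only: recompute the _min/_mean/_median/_max rows from the
# per-episodes "details" JSON shown above, assumed saved as details.json.
import json
from statistics import mean, median

with open("details.json") as f:
    episodes = json.load(f)  # {"LF-norm-loop-000-ego0": {...}, ...}

# Collect every metric name that appears in any episode.
metrics = sorted({name for ep in episodes.values() for name in ep})
for name in metrics:
    values = [ep[name] for ep in episodes.values() if name in ep]
    print(f"{name}_max: {max(values)}")
    print(f"{name}_mean: {mean(values)}")
    print(f"{name}_median: {median(values)}")
    print(f"{name}_min: {min(values)}")
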
Job 57940 | step: LFv-sim | status: success | up to date: yes | duration: 0:24:48
No reset possible
Job 51777 | step: LFv-sim | status: error | up to date: no | duration: 0:10:35
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140511159497872
- M:video_aido:cmdline(in:/;out:/) 140511159497536
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
No reset possible
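
The error in job 51777 is raised on the evaluator side (InvalidEvaluator) while the video-rendering pipeline loads a static banner image: PIL cannot identify 'banner1.png' as an image, which typically means the file exists but is empty, truncated, or not a valid image file. A small sketch of the failing operation in isolation, using only the filename taken from the traceback (this is not the evaluator's own code):

# Sketch: reproduce the image check that failed inside procgraph_pil's imread.
# UnidentifiedImageError here corresponds to the failure in the traceback above.
from PIL import Image, UnidentifiedImageError

try:
    with Image.open("banner1.png") as im:
        im.verify()  # lightweight integrity check, does not decode the full image
    print("banner1.png opens as a valid image")
except FileNotFoundError:
    print("banner1.png does not exist")
except UnidentifiedImageError as e:
    print(f"banner1.png is not a recognizable image: {e}")
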
Job 41021 | step: LFv-sim | status: success | up to date: no | duration: 0:08:31
No reset possible
Job 36665 | step: LFv-sim | status: success | up to date: no | duration: 0:11:20
No reset possible