
Submission 11298

Submission: 11298
Competing: yes
Challenge: aido5-LF-sim-validation
User: Moustafa Elarabi
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 55002
Next:
User label: challenge-aido_LF-template-pytorch
Admin priority: 50
Blessing: n/a
User priority: 50

Job 55002

Episodes (per-episode statistics are available in the job artefacts):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job 55002 | step: LFv-sim | status: success | up to date: yes | duration: 0:35:43
Artefacts hidden.
driven_lanedir_consec_median: 8.932845205543279
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.0942712411530886
in-drivable-lane_median: 0.3749999999999787


other stats
agent_compute-ego0_max: 0.033702524575861564
agent_compute-ego0_mean: 0.030739558527213547
agent_compute-ego0_median: 0.030499488487529515
agent_compute-ego0_min: 0.028256732557933595
complete-iteration_max: 0.21217271092531584
complete-iteration_mean: 0.19555597986210196
complete-iteration_median: 0.2045565456474552
complete-iteration_min: 0.16093811722818163
deviation-center-line_max: 3.5880125062232615
deviation-center-line_mean: 3.0507645127799785
deviation-center-line_min: 2.4265030625904744
deviation-heading_max: 18.340005760208673
deviation-heading_mean: 14.327961347063985
deviation-heading_median: 14.083980232286152
deviation-heading_min: 10.80387916347496
driven_any_max: 11.071445623928309
driven_any_mean: 9.914517459971808
driven_any_median: 9.952165707333316
driven_any_min: 8.682292801292297
driven_lanedir_consec_max: 10.671205287328304
driven_lanedir_consec_mean: 9.17640265441694
driven_lanedir_consec_min: 8.168714919252896
driven_lanedir_max: 10.671205287328304
driven_lanedir_mean: 9.17640265441694
driven_lanedir_median: 8.932845205543279
driven_lanedir_min: 8.168714919252896
get_duckie_state_max: 1.4340847755451187e-06
get_duckie_state_mean: 1.2410272666556351e-06
get_duckie_state_median: 1.2407294915776567e-06
get_duckie_state_min: 1.0485653079221091e-06
get_robot_state_max: 0.003864689135333084
get_robot_state_mean: 0.003615169302807759
get_robot_state_median: 0.0036506898000178783
get_robot_state_min: 0.0032946084758621965
get_state_dump_max: 0.004970386562299768
get_state_dump_mean: 0.0046640689128840795
get_state_dump_median: 0.0047121120432235125
get_state_dump_min: 0.004261665002789525
get_ui_image_max: 0.03213800319922556
get_ui_image_mean: 0.028167212734015956
get_ui_image_median: 0.02889723642779627
get_ui_image_min: 0.022736374881245712
in-drivable-lane_max: 4.350000000000062
in-drivable-lane_mean: 1.2750000000000048
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 11.071445623928309, "get_ui_image": 0.026648439038901604, "step_physics": 0.11526869893768844, "survival_time": 59.99999999999873, "driven_lanedir": 9.271087267169223, "get_state_dump": 0.004970386562299768, "get_robot_state": 0.003864689135333084, "sim_render-ego0": 0.003989350289528217, "get_duckie_state": 1.4340847755451187e-06, "in-drivable-lane": 4.350000000000062, "deviation-heading": 18.340005760208673, "agent_compute-ego0": 0.02918926345418633, "complete-iteration": 0.19776964267028757, "set_robot_commands": 0.0024851073234107074, "deviation-center-line": 3.122847611583319, "driven_lanedir_consec": 9.271087267169223, "sim_compute_sim_state": 0.009021701463354716, "sim_compute_performance-ego0": 0.002230841551692559}, "LF-norm-zigzag-000-ego0": {"driven_any": 8.974241915798274, "get_ui_image": 0.03213800319922556, "step_physics": 0.11838680163311224, "survival_time": 59.99999999999873, "driven_lanedir": 8.594603143917334, "get_state_dump": 0.004567717433869094, "get_robot_state": 0.0035606971092764085, "sim_render-ego0": 0.00370906274781239, "get_duckie_state": 1.166682755520302e-06, "in-drivable-lane": 0.0, "deviation-heading": 14.389890642213048, "agent_compute-ego0": 0.033702524575861564, "complete-iteration": 0.21217271092531584, "set_robot_commands": 0.0022844603218504234, "deviation-center-line": 3.5880125062232615, "driven_lanedir_consec": 8.594603143917334, "sim_compute_sim_state": 0.01175749093467846, "sim_compute_performance-ego0": 0.001973694706836608}, "LF-norm-techtrack-000-ego0": {"driven_any": 8.682292801292297, "get_ui_image": 0.031146033816690944, "step_physics": 0.11843539237182008, "survival_time": 59.99999999999873, "driven_lanedir": 8.168714919252896, "get_state_dump": 0.00485650665257793, "get_robot_state": 0.003740682490759349, "sim_render-ego0": 0.003844420181325234, "get_duckie_state": 1.3147762276350112e-06, "in-drivable-lane": 0.7499999999999574, "deviation-heading": 13.778069822359251, "agent_compute-ego0": 0.0318097135208727, "complete-iteration": 0.2113434486246228, "set_robot_commands": 0.0024657858897009855, "deviation-center-line": 3.065694870722858, "driven_lanedir_consec": 8.168714919252896, "sim_compute_sim_state": 0.01284022831499924, "sim_compute_performance-ego0": 0.002099085409972789}, "LF-norm-small_loop-000-ego0": {"driven_any": 10.93008949886836, "get_ui_image": 0.022736374881245712, "step_physics": 0.08948939070912026, "survival_time": 59.99999999999873, "driven_lanedir": 10.671205287328304, "get_state_dump": 0.004261665002789525, "get_robot_state": 0.0032946084758621965, "sim_render-ego0": 0.0033582670305491885, "get_duckie_state": 1.0485653079221091e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.80387916347496, "agent_compute-ego0": 0.028256732557933595, "complete-iteration": 0.16093811722818163, "set_robot_commands": 0.002189247931767066, "deviation-center-line": 2.4265030625904744, "driven_lanedir_consec": 10.671205287328304, "sim_compute_sim_state": 0.005553571905919059, "sim_compute_performance-ego0": 0.001715442917924638}}
set_robot_commands_max: 0.0024851073234107074
set_robot_commands_mean: 0.0023561503666822956
set_robot_commands_median: 0.0023751231057757044
set_robot_commands_min: 0.002189247931767066
sim_compute_performance-ego0_max: 0.002230841551692559
sim_compute_performance-ego0_mean: 0.0020047661466066485
sim_compute_performance-ego0_median: 0.0020363900584046985
sim_compute_performance-ego0_min: 0.001715442917924638
sim_compute_sim_state_max: 0.01284022831499924
sim_compute_sim_state_mean: 0.009793248154737866
sim_compute_sim_state_median: 0.010389596199016587
sim_compute_sim_state_min: 0.005553571905919059
sim_render-ego0_max: 0.003989350289528217
sim_render-ego0_mean: 0.0037252750623037578
sim_render-ego0_median: 0.003776741464568813
sim_render-ego0_min: 0.0033582670305491885
simulation-passed: 1
step_physics_max: 0.11843539237182008
step_physics_mean: 0.11039507091293524
step_physics_median: 0.11682775028540036
step_physics_min: 0.08948939070912026
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
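The aggregate figures above are per-metric summaries over the four episodes in the per-episodes details JSON. A minimal sketch of that aggregation, assuming the details blob has been saved locally as details.json (the file name is illustrative, not part of the evaluation output):

import json
import statistics

# Load the per-episode details blob (saved locally; file name is an assumption).
with open("details.json") as f:
    per_episode = json.load(f)  # {"LF-norm-loop-000-ego0": {...}, ...}

# Every metric name that appears in at least one episode.
metric_names = sorted({name for ep in per_episode.values() for name in ep})

# Recompute min/mean/median/max across the four episodes for each metric.
for name in metric_names:
    values = [ep[name] for ep in per_episode.values() if name in ep]
    print(f"{name}_min: {min(values)}")
    print(f"{name}_mean: {statistics.mean(values)}")
    print(f"{name}_median: {statistics.median(values)}")
    print(f"{name}_max: {max(values)}")

With four episodes, statistics.median averages the two middle values, which is why, for example, driven_lanedir_consec_median falls between the per-episode figures rather than matching any single episode.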
Job 54971 | step: LFv-sim | status: success | up to date: yes | duration: 0:34:47
Artefacts hidden.
No reset possible
Job 50060 | step: LFv-sim | status: error | up to date: no | duration: 0:10:12 | message: InvalidEvaluator: Tr [...]
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139638411724928
- M:video_aido:cmdline(in:/;out:/) 139638411713696
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
Artefacts hidden.
No reset possible
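Job 50060 failed on the evaluator side: while assembling the episode video, procgraph's static_image block asked PIL to open banner1.png, PIL could not identify it as an image, and the experiment manager surfaced this as InvalidEvaluator. A minimal sketch of the kind of preflight check that would catch such an asset problem before video generation, assuming Pillow is installed (the file name simply mirrors the traceback):

from PIL import Image, UnidentifiedImageError

def is_decodable_image(path: str) -> bool:
    """Return True if PIL can identify and verify the image at `path`."""
    try:
        with Image.open(path) as im:
            im.verify()  # integrity check without a full decode
        return True
    except (FileNotFoundError, UnidentifiedImageError, OSError):
        return False

# banner1.png is the asset named in the traceback above.
if not is_decodable_image("banner1.png"):
    print("banner1.png is missing or not a valid image; the video overlay step would fail.")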
Job 41731 | step: LFv-sim | status: success | up to date: no | duration: 0:09:23
Artefacts hidden.
No reset possible
Job 41729 | step: LFv-sim | status: success | up to date: no | duration: 0:09:58
Artefacts hidden.
No reset possible