
Submission 10063

Submission: 10063
Competing: yes
Challenge: aido5-LF-sim-validation
User: Daniil Lisus
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 57979
Next:
User label: sim-exercise-1
Admin priority: 50
Blessing: n/a
User priority: 50

Job 57979

Episodes evaluated (detailed per-episode statistics are available on the original page):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID: 57979, step: LFv-sim, status: success, up to date: yes, duration: 0:26:09
driven_lanedir_consec_median: 1.79890022415042
survival_time_median: 49.34999999999933
deviation-center-line_median: 2.719570801058751
in-drivable-lane_median: 20.749999999999655


other stats
agent_compute-ego0_max: 0.012720518292121168
agent_compute-ego0_mean: 0.012320964411093371
agent_compute-ego0_median: 0.012277611721237898
agent_compute-ego0_min: 0.01200811590977652
complete-iteration_max: 0.18590921094340665
complete-iteration_mean: 0.16733301383299046
complete-iteration_median: 0.1728049127845407
complete-iteration_min: 0.13781301881947386
deviation-center-line_max: 3.694394068492218
deviation-center-line_mean: 2.6894917703651564
deviation-center-line_min: 1.624431410850905
deviation-heading_max: 13.745222910173869
deviation-heading_mean: 9.066379809075393
deviation-heading_median: 9.34312769926657
deviation-heading_min: 3.83404092759456
driven_any_max: 6.764695115067394
driven_any_mean: 4.645510891778469
driven_any_median: 4.538690963855758
driven_any_min: 2.739966524334965
driven_lanedir_consec_max: 2.5492986827342072
driven_lanedir_consec_mean: 1.8165358093287551
driven_lanedir_consec_min: 1.119044106279974
driven_lanedir_max: 2.972258602201107
driven_lanedir_mean: 2.19047961387247
driven_lanedir_median: 2.1752259265771796
driven_lanedir_min: 1.4392080001344143
get_duckie_state_max: 1.395873303683299e-06
get_duckie_state_mean: 1.3608460792570352e-06
get_duckie_state_median: 1.3614044748886046e-06
get_duckie_state_min: 1.3247020635676323e-06
get_robot_state_max: 0.003819535363395259
get_robot_state_mean: 0.0036325599229384783
get_robot_state_median: 0.003613319916958384
get_robot_state_min: 0.003484064494441887
get_state_dump_max: 0.004770329313458136
get_state_dump_mean: 0.004596288229300707
get_state_dump_median: 0.004566747742199225
get_state_dump_min: 0.0044813281193462435
get_ui_image_max: 0.03447268978241951
get_ui_image_mean: 0.02996292457643647
get_ui_image_median: 0.03035416470831297
get_ui_image_min: 0.02467067910670043
in-drivable-lane_max: 31.14999999999895
in-drivable-lane_mean: 21.0999999999996
in-drivable-lane_min: 11.750000000000131
per-episodes (the JSON on the next line gives the raw per-episode metrics; a parsing sketch follows this stats list):
details{"LF-norm-loop-000-ego0": {"driven_any": 5.9181242565873085, "get_ui_image": 0.027362254934445905, "step_physics": 0.09601733805634992, "survival_time": 59.99999999999873, "driven_lanedir": 2.5611058885864457, "get_state_dump": 0.004548209989993995, "get_robot_state": 0.0035734079362550047, "sim_render-ego0": 0.003770231307297325, "get_duckie_state": 1.3556706716774108e-06, "in-drivable-lane": 27.949999999999555, "deviation-heading": 13.745222910173869, "agent_compute-ego0": 0.012050129591078682, "complete-iteration": 0.16179305568126515, "set_robot_commands": 0.002140538480061476, "deviation-center-line": 3.0710508638515237, "driven_lanedir_consec": 2.147846694883144, "sim_compute_sim_state": 0.010305578563731476, "sim_compute_performance-ego0": 0.001936155195339435}, "LF-norm-zigzag-000-ego0": {"driven_any": 3.159257671124207, "get_ui_image": 0.03447268978241951, "step_physics": 0.11028587156726467, "survival_time": 38.69999999999994, "driven_lanedir": 1.789345964567914, "get_state_dump": 0.004585285494404455, "get_robot_state": 0.003653231897661763, "sim_render-ego0": 0.003922632894208355, "get_duckie_state": 1.3671382780997984e-06, "in-drivable-lane": 13.549999999999756, "deviation-heading": 11.702125382588498, "agent_compute-ego0": 0.012505093851397114, "complete-iteration": 0.18590921094340665, "set_robot_commands": 0.00223548827632781, "deviation-center-line": 2.3680907382659786, "driven_lanedir_consec": 1.449953753417696, "sim_compute_sim_state": 0.012109902597242784, "sim_compute_performance-ego0": 0.0020477867126464846}, "LF-norm-techtrack-000-ego0": {"driven_any": 2.739966524334965, "get_ui_image": 0.033346074482180035, "step_physics": 0.11104788870181675, "survival_time": 26.45000000000024, "driven_lanedir": 1.4392080001344143, "get_state_dump": 0.004770329313458136, "get_robot_state": 0.003819535363395259, "sim_render-ego0": 0.004044144558456709, "get_duckie_state": 1.395873303683299e-06, "in-drivable-lane": 11.750000000000131, "deviation-heading": 3.83404092759456, "agent_compute-ego0": 0.012720518292121168, "complete-iteration": 0.18381676988781623, "set_robot_commands": 0.002281298727359412, "deviation-center-line": 1.624431410850905, "driven_lanedir_consec": 1.119044106279974, "sim_compute_sim_state": 0.009617900848388672, "sim_compute_performance-ego0": 0.0020740387574681696}, "LF-norm-small_loop-000-ego0": {"driven_any": 6.764695115067394, "get_ui_image": 0.02467067910670043, "step_physics": 0.07961117020256811, "survival_time": 59.99999999999873, "driven_lanedir": 2.972258602201107, "get_state_dump": 0.0044813281193462435, "get_robot_state": 0.003484064494441887, "sim_render-ego0": 0.003667273588919024, "get_duckie_state": 1.3247020635676323e-06, "in-drivable-lane": 31.14999999999895, "deviation-heading": 6.9841300159446424, "agent_compute-ego0": 0.01200811590977652, "complete-iteration": 0.13781301881947386, "set_robot_commands": 0.002036953647368953, "deviation-center-line": 3.694394068492218, "driven_lanedir_consec": 2.5492986827342072, "sim_compute_sim_state": 0.005925380022301464, "sim_compute_performance-ego0": 0.001844845048394628}}
set_robot_commands_max: 0.002281298727359412
set_robot_commands_mean: 0.0021735697827794127
set_robot_commands_median: 0.002188013378194643
set_robot_commands_min: 0.002036953647368953
sim_compute_performance-ego0_max: 0.0020740387574681696
sim_compute_performance-ego0_mean: 0.001975706428462179
sim_compute_performance-ego0_median: 0.0019919709539929597
sim_compute_performance-ego0_min: 0.001844845048394628
sim_compute_sim_state_max: 0.012109902597242784
sim_compute_sim_state_mean: 0.0094896905079161
sim_compute_sim_state_median: 0.009961739706060076
sim_compute_sim_state_min: 0.005925380022301464
sim_render-ego0_max: 0.004044144558456709
sim_render-ego0_mean: 0.003851070587220353
sim_render-ego0_median: 0.00384643210075284
sim_render-ego0_min: 0.003667273588919024
simulation-passed: 1
step_physics_max: 0.11104788870181675
step_physics_mean: 0.09924056713199986
step_physics_median: 0.1031516048118073
step_physics_min: 0.07961117020256811
survival_time_max: 59.99999999999873
survival_time_mean: 46.287499999999405
survival_time_min: 26.45000000000024
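
The aggregate values in this list can be recomputed from the per-episodes JSON above. A minimal sketch in Python, assuming that JSON object has been saved to a file named per_episodes.json (the file name is illustrative, not part of the submission artefacts):

    import json
    import statistics

    # Load the per-episode details shown in the "per-episodes" row above.
    # "per_episodes.json" is an assumed file name used only for illustration.
    with open("per_episodes.json") as f:
        episodes = json.load(f)  # maps episode name -> dict of metrics

    # Recompute the aggregates for one of the listed metrics.
    metric = "deviation-center-line"
    values = [ep[metric] for ep in episodes.values()]

    print(f"{metric}_min    {min(values)}")
    print(f"{metric}_max    {max(values)}")
    print(f"{metric}_mean   {statistics.mean(values)}")
    print(f"{metric}_median {statistics.median(values)}")
    # With the four episodes of job 57979 this reproduces, for example,
    # deviation-center-line_median = 2.719570801058751.

Note that with four episodes the median is the mean of the two middle values, which is why the reported medians do not necessarily coincide with any single episode's value.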
Job ID: 51897, step: LFv-sim, status: error, up to date: no, duration: 0:04:24
Message: InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140653694844016
- M:video_aido:cmdline(in:/;out:/) 140653696130592
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
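
The chain of exceptions above bottoms out in PIL failing to identify banner1.png; procgraph's imread wraps that in a ValueError, and the experiment manager reports it as InvalidEvaluator. A minimal sketch of the root failure mode, using an empty file as an illustrative stand-in for whatever made banner1.png unreadable:

    from PIL import Image, UnidentifiedImageError

    # An empty (or otherwise corrupt) file is one way an image becomes
    # unidentifiable; this stands in for the broken banner1.png.
    with open("banner1.png", "wb"):
        pass

    try:
        Image.open("banner1.png")
    except UnidentifiedImageError as e:
        # procgraph's imread converts this into a ValueError, which ultimately
        # surfaces as duckietown_challenges.exceptions.InvalidEvaluator.
        print(type(e).__name__, e)  # e.g. UnidentifiedImageError cannot identify image file 'banner1.png'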
Job ID: 41517, step: LFv-sim, status: success, up to date: no, duration: 0:08:59
Job ID: 37172, step: LFv-sim, status: success, up to date: no, duration: 0:10:25
Job ID: 36597, step: LFv-sim, status: success, up to date: no, duration: 0:10:44
Job ID: 36596, step: LFv-sim, status: success, up to date: no, duration: 0:10:16