
Submission 9784

Submission: 9784
Competing: yes
Challenge: aido5-LF-sim-validation
User: Charlie Gauthier 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58080
Next:
User label: template-ros
Admin priority: 50
Blessing: n/a
User priority: 50

58080

Episodes (episode images omitted; detailed per-episode statistics appear below):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58080 | LFv-sim | success | yes | | | 0:40:20 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 1.0459970517857482
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.3149562778321577
in-drivable-lane_median: 27.099999999999408


other stats

agent_compute-ego0_max: 0.01169534765810494
agent_compute-ego0_mean: 0.011313017063792005
agent_compute-ego0_median: 0.011230095042277136
agent_compute-ego0_min: 0.0110965305125088
complete-iteration_max: 0.3160396174129896
complete-iteration_mean: 0.26434835893327646
complete-iteration_median: 0.26535151840546645
complete-iteration_min: 0.21065078150918343
deviation-center-line_max: 3.497357075702119
deviation-center-line_mean: 3.1690619604786896
deviation-center-line_min: 2.5489782105483227
deviation-heading_max: 25.805764659865662
deviation-heading_mean: 24.406973234545617
deviation-heading_median: 25.625319053816817
deviation-heading_min: 20.571490170683177
driven_any_max: 3.2030292102568043
driven_any_mean: 2.889016263785712
driven_any_median: 2.8361483709740147
driven_any_min: 2.6807391029380145
driven_lanedir_consec_max: 1.1417585735293485
driven_lanedir_consec_mean: 0.94025343182887
driven_lanedir_consec_min: 0.5272610502146349
driven_lanedir_max: 1.1417585735293485
driven_lanedir_mean: 1.049406253463037
driven_lanedir_median: 1.0459970517857482
driven_lanedir_min: 0.9638723367513036
get_duckie_state_max: 1.1478236672483215e-06
get_duckie_state_mean: 1.0987900377411726e-06
get_duckie_state_median: 1.0975989374292582e-06
get_duckie_state_min: 1.0521386088578529e-06
get_robot_state_max: 0.003674757470695502
get_robot_state_mean: 0.0035896552889472144
get_robot_state_median: 0.00356734076904119
get_robot_state_min: 0.0035491821470109747
get_state_dump_max: 0.004530894567726256
get_state_dump_mean: 0.00451893353839401
get_state_dump_median: 0.004521185015758607
get_state_dump_min: 0.004502469554332571
get_ui_image_max: 0.03637809380206538
get_ui_image_mean: 0.030642912101983825
get_ui_image_median: 0.030309367934234135
get_ui_image_min: 0.025574818737401653
in-drivable-lane_max: 33.34999999999934
in-drivable-lane_mean: 28.39999999999941
in-drivable-lane_min: 26.04999999999947
per-episodes details:

{"LF-norm-loop-000-ego0": {"driven_any": 2.6807391029380145, "get_ui_image": 0.0281237841248016, "step_physics": 0.18352277411906348, "survival_time": 59.99999999999873, "driven_lanedir": 0.9897146638570612, "get_state_dump": 0.004530894567726256, "get_robot_state": 0.003674757470695502, "sim_render-ego0": 0.003661316102192265, "get_duckie_state": 1.135317113973219e-06, "in-drivable-lane": 26.849999999999447, "deviation-heading": 25.805764659865662, "agent_compute-ego0": 0.0110965305125088, "complete-iteration": 0.24744808564674448, "set_robot_commands": 0.0022131699904315576, "deviation-center-line": 3.140859176900409, "driven_lanedir_consec": 0.9897146638570612, "sim_compute_sim_state": 0.00862186238926515, "sim_compute_performance-ego0": 0.001922836510168325},
"LF-norm-zigzag-000-ego0": {"driven_any": 3.2030292102568043, "get_ui_image": 0.03637809380206538, "step_physics": 0.24056918317332657, "survival_time": 59.99999999999873, "driven_lanedir": 0.9638723367513036, "get_state_dump": 0.004502469554332571, "get_robot_state": 0.0035844158868210004, "sim_render-ego0": 0.003662288238563506, "get_duckie_state": 1.0598807608852971e-06, "in-drivable-lane": 33.34999999999934, "deviation-heading": 20.571490170683177, "agent_compute-ego0": 0.011312986988509128, "complete-iteration": 0.3160396174129896, "set_robot_commands": 0.0022970641483176656, "deviation-center-line": 3.497357075702119, "driven_lanedir_consec": 0.5272610502146349, "sim_compute_sim_state": 0.011743041696794625, "sim_compute_performance-ego0": 0.001917868033733892},
"LF-norm-techtrack-000-ego0": {"driven_any": 2.9664306682153683, "get_ui_image": 0.03249495174366668, "step_physics": 0.21423932416155175, "survival_time": 59.99999999999873, "driven_lanedir": 1.1417585735293485, "get_state_dump": 0.004526249077993071, "get_robot_state": 0.00355026565126138, "sim_render-ego0": 0.0036510956674491632, "get_duckie_state": 1.1478236672483215e-06, "in-drivable-lane": 27.34999999999937, "deviation-heading": 25.621441722727543, "agent_compute-ego0": 0.01169534765810494, "complete-iteration": 0.2832549511641884, "set_robot_commands": 0.002190032072805743, "deviation-center-line": 2.5489782105483227, "driven_lanedir_consec": 1.1417585735293485, "sim_compute_sim_state": 0.008927632926604234, "sim_compute_performance-ego0": 0.0019028339655969063},
"LF-norm-small_loop-000-ego0": {"driven_any": 2.705866073732661, "get_ui_image": 0.025574818737401653, "step_physics": 0.1519916301762234, "survival_time": 59.99999999999873, "driven_lanedir": 1.102279439714435, "get_state_dump": 0.004516120953524143, "get_robot_state": 0.0035491821470109747, "sim_render-ego0": 0.003618549248459535, "get_duckie_state": 1.0521386088578529e-06, "in-drivable-lane": 26.04999999999947, "deviation-heading": 25.629196384906088, "agent_compute-ego0": 0.011147203096045144, "complete-iteration": 0.21065078150918343, "set_robot_commands": 0.0022417824036076502, "deviation-center-line": 3.489053378763907, "driven_lanedir_consec": 1.102279439714435, "sim_compute_sim_state": 0.006069095406703012, "sim_compute_performance-ego0": 0.0018718328007452693}}
set_robot_commands_max: 0.0022970641483176656
set_robot_commands_mean: 0.002235512153790654
set_robot_commands_median: 0.0022274761970196037
set_robot_commands_min: 0.002190032072805743
sim_compute_performance-ego0_max: 0.001922836510168325
sim_compute_performance-ego0_mean: 0.0019038428275610984
sim_compute_performance-ego0_median: 0.0019103509996653992
sim_compute_performance-ego0_min: 0.0018718328007452693
sim_compute_sim_state_max: 0.011743041696794625
sim_compute_sim_state_mean: 0.008840408104841754
sim_compute_sim_state_median: 0.008774747657934692
sim_compute_sim_state_min: 0.006069095406703012
sim_render-ego0_max: 0.003662288238563506
sim_render-ego0_mean: 0.0036483123141661175
sim_render-ego0_median: 0.003656205884820714
sim_render-ego0_min: 0.003618549248459535
simulation-passed: 1
step_physics_max: 0.24056918317332657
step_physics_mean: 0.1975807279075413
step_physics_median: 0.19888104914030763
step_physics_min: 0.1519916301762234
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
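The aggregate statistics above are plain per-metric summaries over the four episodes. A minimal standard-library sketch, using the `driven_lanedir_consec` values from the per-episodes details as an example (with four episodes, the median is the average of the two middle values):

```python
from statistics import mean, median

# driven_lanedir_consec per episode, copied from the per-episodes details
driven_lanedir_consec = {
    "LF-norm-loop-000-ego0": 0.9897146638570612,
    "LF-norm-zigzag-000-ego0": 0.5272610502146349,
    "LF-norm-techtrack-000-ego0": 1.1417585735293485,
    "LF-norm-small_loop-000-ego0": 1.102279439714435,
}

values = list(driven_lanedir_consec.values())
summary = {
    "min": min(values),
    "max": max(values),
    "mean": mean(values),
    "median": median(values),  # even count: midpoint of the two middle values
}
```

Running this reproduces the headline numbers: the median comes out to 1.04599705... and the mean to 0.94025343..., matching `driven_lanedir_consec_median` and `driven_lanedir_consec_mean` above.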
52101 | LFv-sim | error | no | | | 0:10:36 | InvalidEvaluator
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140020756750000
- M:video_aido:cmdline(in:/;out:/) 140020755123120
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
No reset possible
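This job failed because PIL could not identify `banner1.png` when the video renderer tried to load it, which typically means the file is corrupt or is not actually a PNG (for example, an HTML error page saved under a `.png` name). A quick standard-library pre-flight check, before handing a banner to the video pipeline, is to compare the file's first eight bytes against the PNG signature (a hypothetical helper, not part of the evaluator):

```python
# The 8-byte signature every valid PNG file starts with (per the PNG spec).
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path: str) -> bool:
    """Return True if the file exists and starts with the PNG signature."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_MAGIC
    except OSError:
        return False
```

A truncated download or a misnamed non-image file fails this check even though the extension looks right, which is exactly the situation `UnidentifiedImageError` reports.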
41630 | LFv-sim | success | no | | | 0:09:41 |
No reset possible
41628 | LFv-sim | success | no | | | 0:09:31 |
No reset possible
38020 | LFv-sim | success | no | | | 0:10:42 |
No reset possible
36184 | LFv-sim | success | no | | | 0:12:12 |
No reset possible
35936 | LFv-sim | success | no | | | 0:07:29 |
No reset possible
35586 | LFv-sim | success | no | | | 0:01:47 |
No reset possible
35330 | LFv-sim | error | no | | | 0:26:22 | result file not found
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9784/LFv-sim-reg04-c054faef3177-1-job35330:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9784/LFv-sim-reg04-c054faef3177-1-job35330/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9784/LFv-sim-reg04-c054faef3177-1-job35330/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9784/LFv-sim-reg04-c054faef3177-1-job35330/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9784/LFv-sim-reg04-c054faef3177-1-job35330/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9784/LFv-sim-reg04-c054faef3177-1-job35330/logs/challenges-runner/stderr.log
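The runner's check here is simple: it looks for `challenge-results/challenge_results.yaml` under the job's working directory and fails if the evaluator never wrote it. That check can be sketched with the standard library (the helper name is hypothetical; the directory layout is the one shown in the message above):

```python
from pathlib import Path

def find_challenge_results(run_dir: str):
    """Return the challenge_results.yaml path if the evaluator wrote it, else None."""
    candidate = Path(run_dir) / "challenge-results" / "challenge_results.yaml"
    return candidate if candidate.is_file() else None
```

If this returns None, the scores cannot be read back, and the next place to look is the evaluator's own log (e.g. `logs/challenges-runner/stderr.log` in the file list above).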
No reset possible