
Submission 9261

Submission: 9261
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58462
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

58462


LF-norm-loop-000

LF-norm-small_loop-000

LF-norm-techtrack-000

LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58462 | LFv-sim | success | yes | | | 0:35:02
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 9.149661577503
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.8694794856383536
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.012845380022364988
agent_compute-ego0_mean: 0.012631304506259
agent_compute-ego0_median: 0.012620921039660704
agent_compute-ego0_min: 0.012437995923349602
complete-iteration_max: 0.20255514326738777
complete-iteration_mean: 0.17864019909468817
complete-iteration_median: 0.17686369248373524
complete-iteration_min: 0.15827826814389448
deviation-center-line_max: 4.060564372604027
deviation-center-line_mean: 3.709214259264463
deviation-center-line_min: 3.0373336931771178
deviation-heading_max: 12.240754267001009
deviation-heading_mean: 10.843718160974635
deviation-heading_median: 11.68288056611905
deviation-heading_min: 7.76835724465944
driven_any_max: 11.312336994952757
driven_any_mean: 9.864685792262607
driven_any_median: 9.425548327777324
driven_any_min: 9.295309518543023
driven_lanedir_consec_max: 11.158396329103674
driven_lanedir_consec_mean: 9.5818402603761
driven_lanedir_consec_min: 8.869641557394726
driven_lanedir_max: 11.158396329103674
driven_lanedir_mean: 9.585522247322547
driven_lanedir_median: 9.149661577503
driven_lanedir_min: 8.88436950518052
get_duckie_state_max: 1.452943863817099e-06
get_duckie_state_mean: 1.4185012131309031e-06
get_duckie_state_median: 1.430511474609375e-06
get_duckie_state_min: 1.360038039487764e-06
get_robot_state_max: 0.003758993275854411
get_robot_state_mean: 0.003726185509604678
get_robot_state_median: 0.00372067557882012
get_robot_state_min: 0.0037043976049240582
get_state_dump_max: 0.004673678511683888
get_state_dump_mean: 0.00464739281370876
get_state_dump_median: 0.0046625466866854525
get_state_dump_min: 0.004590799369780249
get_ui_image_max: 0.03574775339264755
get_ui_image_mean: 0.03077811563541053
get_ui_image_median: 0.03047847777580242
get_ui_image_min: 0.026407753597389748
in-drivable-lane_max: 2.8999999999999666
in-drivable-lane_mean: 0.7249999999999917
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 11.312336994952757, "get_ui_image": 0.02855054345555746, "step_physics": 0.09609502499347722, "survival_time": 59.99999999999873, "driven_lanedir": 11.158396329103674, "get_state_dump": 0.004590799369780249, "get_robot_state": 0.003725677207546568, "sim_render-ego0": 0.003823911022087815, "get_duckie_state": 1.452943863817099e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.76835724465944, "agent_compute-ego0": 0.012503787341661797, "complete-iteration": 0.16290533274635485, "set_robot_commands": 0.0022866712025460556, "deviation-center-line": 3.0373336931771178, "driven_lanedir_consec": 11.158396329103674, "sim_compute_sim_state": 0.009174705842055448, "sim_compute_performance-ego0": 0.0020697970473696845}, "LF-norm-zigzag-000-ego0": {"driven_any": 9.295309518543023, "get_ui_image": 0.03574775339264755, "step_physics": 0.12459027479332156, "survival_time": 59.99999999999873, "driven_lanedir": 9.051200075733076, "get_state_dump": 0.004659470868646652, "get_robot_state": 0.0037043976049240582, "sim_render-ego0": 0.0038985805050915823, "get_duckie_state": 1.360038039487764e-06, "in-drivable-lane": 0.0, "deviation-heading": 12.240754267001009, "agent_compute-ego0": 0.012738054737659616, "complete-iteration": 0.20255514326738777, "set_robot_commands": 0.0022597493577460066, "deviation-center-line": 4.005477585331246, "driven_lanedir_consec": 9.051200075733076, "sim_compute_sim_state": 0.012821906611484652, "sim_compute_performance-ego0": 0.0020501387307884093}, "LF-norm-techtrack-000-ego0": {"driven_any": 9.503272818369046, "get_ui_image": 0.03240641209604738, "step_physics": 0.11584470988709564, "survival_time": 59.99999999999873, "driven_lanedir": 9.248123079272926, "get_state_dump": 0.004673678511683888, "get_robot_state": 0.003715673950093672, "sim_render-ego0": 0.00384932791164376, "get_duckie_state": 1.4362684594502954e-06, "in-drivable-lane": 0.0, "deviation-heading": 12.220022313648457, "agent_compute-ego0": 
0.012845380022364988, "complete-iteration": 0.1908220522211156, "set_robot_commands": 0.002290508332200888, "deviation-center-line": 4.060564372604027, "driven_lanedir_consec": 9.248123079272926, "sim_compute_sim_state": 0.013019387668415866, "sim_compute_performance-ego0": 0.00208978529873736}, "LF-norm-small_loop-000-ego0": {"driven_any": 9.347823837185604, "get_ui_image": 0.026407753597389748, "step_physics": 0.09654820948020308, "survival_time": 59.99999999999873, "driven_lanedir": 8.88436950518052, "get_state_dump": 0.004665622504724253, "get_robot_state": 0.003758993275854411, "sim_render-ego0": 0.0038559875520043927, "get_duckie_state": 1.4247544897684546e-06, "in-drivable-lane": 2.8999999999999666, "deviation-heading": 11.145738818589642, "agent_compute-ego0": 0.012437995923349602, "complete-iteration": 0.15827826814389448, "set_robot_commands": 0.00226906356366846, "deviation-center-line": 3.733481385945461, "driven_lanedir_consec": 8.869641557394726, "sim_compute_sim_state": 0.006212162038468004, "sim_compute_performance-ego0": 0.00203879846323539}}
set_robot_commands_max: 0.002290508332200888
set_robot_commands_mean: 0.0022764981140403527
set_robot_commands_median: 0.002277867383107258
set_robot_commands_min: 0.0022597493577460066
sim_compute_performance-ego0_max: 0.00208978529873736
sim_compute_performance-ego0_mean: 0.0020621298850327105
sim_compute_performance-ego0_median: 0.0020599678890790467
sim_compute_performance-ego0_min: 0.00203879846323539
sim_compute_sim_state_max: 0.013019387668415866
sim_compute_sim_state_mean: 0.010307040540105991
sim_compute_sim_state_median: 0.01099830622677005
sim_compute_sim_state_min: 0.006212162038468004
sim_render-ego0_max: 0.0038985805050915823
sim_render-ego0_mean: 0.003856951747706888
sim_render-ego0_median: 0.003852657731824076
sim_render-ego0_min: 0.003823911022087815
simulation-passed: 1
step_physics_max: 0.12459027479332156
step_physics_mean: 0.10826955478852436
step_physics_median: 0.10619645968364937
step_physics_min: 0.09609502499347722
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
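The aggregate rows (min, mean, median, max) can be reproduced from the per-episodes details. A minimal stdlib sketch for one metric, with the four episode values copied from the JSON above:

```python
import statistics

# Per-episode values for driven_lanedir_consec, copied from the
# per-episodes details above (one entry per simulated episode).
driven_lanedir_consec = {
    "LF-norm-loop-000-ego0": 11.158396329103674,
    "LF-norm-zigzag-000-ego0": 9.051200075733076,
    "LF-norm-techtrack-000-ego0": 9.248123079272926,
    "LF-norm-small_loop-000-ego0": 8.869641557394726,
}

values = list(driven_lanedir_consec.values())
print("min:   ", min(values))                # ≈ 8.8696
print("mean:  ", statistics.mean(values))    # ≈ 9.5818 (matches the mean row)
print("median:", statistics.median(values))  # ≈ 9.1497 (matches the median row)
print("max:   ", max(values))                # ≈ 11.1584
```

With four episodes, the median is the average of the two middle values, which is why the reported median (9.149661577503) appears in no single episode.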
No reset possible
58459 | LFv-sim | success | yes | | | 0:35:56
52417 | LFv-sim | error | no | | | 0:09:00
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140486070027024
- M:video_aido:cmdline(in:/;out:/) 140486070026304
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
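The root cause of this failure is PIL raising UnidentifiedImageError for 'banner1.png', which almost always means the file's leading magic bytes do not match any format PIL recognizes, e.g. a zero-byte file or an HTML error page saved under the image's name. A stdlib-only sketch of that header check (the helper name is ours, not part of the evaluator):

```python
import pathlib

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"  # the fixed 8-byte signature of every PNG file


def looks_like_png(path: str) -> bool:
    """Heuristic check: does the file start with the PNG signature?

    If this returns False for a file named *.png, PIL will refuse to
    open it with UnidentifiedImageError, as seen in the traceback above.
    """
    header = pathlib.Path(path).read_bytes()[: len(PNG_MAGIC)]
    return header == PNG_MAGIC
```

If the check fails, the asset was likely truncated or replaced during download; re-fetching banner1.png in the evaluator image would be the first thing to try.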
52405 | LFv-sim | error | no | | | 0:10:23
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139750144417456
- M:video_aido:cmdline(in:/;out:/) 139750144424544
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
41770 | LFv-sim | success | no | | | 0:09:38
38283 | LFv-sim | success | no | | | 0:11:29
38282 | LFv-sim | success | no | | | 0:10:57
36386 | LFv-sim | error | no | | | 0:00:43
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-Sandy1-sandy-1-job36386-a-wd/challenge-results/challenge_results.yaml' does not exist.
35817 | LFv-sim | error | no | | | 0:00:48
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1063, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-noname-sandy-1-job35817-a-wd/challenge-results/challenge_results.yaml' does not exist.
35815 | LFv-sim | success | no | | | 0:00:55
35406 | LFv-sim | error | no | | | 0:23:24
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-reg04-c054faef3177-1-job35406:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-reg04-c054faef3177-1-job35406/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-reg04-c054faef3177-1-job35406/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-reg04-c054faef3177-1-job35406/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-reg04-c054faef3177-1-job35406/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9261/LFv-sim-reg04-c054faef3177-1-job35406/logs/challenges-runner/stderr.log
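This job and the NoResultsFound jobs above fail the same filesystem check: the evaluator exited before writing challenge-results/challenge_results.yaml under the job working directory. A minimal sketch of that existence check (the helper name is ours; the path layout is taken from the error messages above):

```python
import pathlib
from typing import Optional


def results_file(wd: str) -> Optional[pathlib.Path]:
    """Return the challenge results file for a job working dir, or None.

    Mirrors the path the runner looks for in read_challenge_results:
    <wd>/challenge-results/challenge_results.yaml
    """
    p = pathlib.Path(wd) / "challenge-results" / "challenge_results.yaml"
    return p if p.is_file() else None
```

When this returns None, the evaluator logs (stdout.log and stderr.log in the file list above) are the place to look for the underlying crash or import error.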
35049 | LFv-sim | success | no | | | 0:23:50
34558 | LFv-sim | success | no | | | 0:26:24