Duckietown Challenges

Submission 9301

Submission: 9301
Competing: yes
Challenge: aido5-LF-sim-validation
User: Raphael Jean
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58359
Next:
User label: real-exercise-1
Admin priority: 50
Blessing: n/a
User priority: 50

58359

Episodes (click an image on the dashboard to see detailed statistics about that episode):

 - LF-norm-loop-000
 - LF-norm-small_loop-000
 - LF-norm-techtrack-000
 - LF-norm-zigzag-000

Evaluation jobs for this submission

Job 58359: step LFv-sim, status success, up to date yes, duration 0:37:50
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 8.848608282655686
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.684077679430567
in-drivable-lane_median: 0.6749999999999869


Other stats:

agent_compute-ego0_max: 0.04846969849858851
agent_compute-ego0_mean: 0.030188821833099948
agent_compute-ego0_median: 0.030289012625453672
agent_compute-ego0_min: 0.011707563582903936
complete-iteration_max: 0.2693807594385076
complete-iteration_mean: 0.2143661067646608
complete-iteration_median: 0.20946002621932588
complete-iteration_min: 0.1691636151814838
deviation-center-line_max: 2.8534682288672433
deviation-center-line_mean: 2.6139023280106772
deviation-center-line_min: 2.233985724314331
deviation-heading_max: 13.168424869550703
deviation-heading_mean: 12.231655735315188
deviation-heading_median: 11.984277508079895
deviation-heading_min: 11.789643055550252
driven_any_max: 10.093765559409016
driven_any_mean: 9.947497714253457
driven_any_median: 9.945514726411506
driven_any_min: 9.805195844781792
driven_lanedir_consec_max: 9.736859100019196
driven_lanedir_consec_mean: 8.64356619667307
driven_lanedir_consec_min: 7.140189121361715
driven_lanedir_max: 9.736859100019196
driven_lanedir_mean: 9.531779678579978
driven_lanedir_median: 9.514127151385818
driven_lanedir_min: 9.362005311529083
get_duckie_state_max: 1.3534869877722342e-06
get_duckie_state_mean: 1.2797380267928585e-06
get_duckie_state_median: 1.2934356803798754e-06
get_duckie_state_min: 1.1785937586394477e-06
get_robot_state_max: 0.0038965596048957
get_robot_state_mean: 0.003739715317306074
get_robot_state_median: 0.0037902269037835903
get_robot_state_min: 0.0034818478567614146
get_state_dump_max: 0.005014308386301617
get_state_dump_mean: 0.004730829589075093
get_state_dump_median: 0.00470856514501135
get_state_dump_min: 0.004491879679976058
get_ui_image_max: 0.03830657831139608
get_ui_image_mean: 0.031516580508213855
get_ui_image_median: 0.0305417754270949
get_ui_image_min: 0.026676192867269525
in-drivable-lane_max: 1.1999999999999955
in-drivable-lane_mean: 0.6374999999999924
in-drivable-lane_min: 0.0
per-episodes details: {"LF-norm-loop-000-ego0": {"driven_any": 10.05223273454044, "get_ui_image": 0.026676192867269525, "step_physics": 0.10652094200985517, "survival_time": 59.99999999999873, "driven_lanedir": 9.736859100019196, "get_state_dump": 0.004491879679976058, "get_robot_state": 0.0034818478567614146, "sim_render-ego0": 0.003575421689849015, "get_duckie_state": 1.2405309748590042e-06, "in-drivable-lane": 0.0, "deviation-heading": 11.90953279575455, "agent_compute-ego0": 0.011707563582903936, "complete-iteration": 0.1691636151814838, "set_robot_commands": 0.002093791366119766, "deviation-center-line": 2.233985724314331, "driven_lanedir_consec": 9.736859100019196, "sim_compute_sim_state": 0.008679498740775103, "sim_compute_performance-ego0": 0.0018523896762870137}, "LF-norm-zigzag-000-ego0": {"driven_any": 9.805195844781792, "get_ui_image": 0.03830657831139608, "step_physics": 0.1512320817856864, "survival_time": 59.99999999999873, "driven_lanedir": 9.365800927087497, "get_state_dump": 0.005014308386301617, "get_robot_state": 0.003885897271936879, "sim_render-ego0": 0.004006939267834259, "get_duckie_state": 1.346340385900747e-06, "in-drivable-lane": 1.1999999999999955, "deviation-heading": 12.059022220405245, "agent_compute-ego0": 0.04841986266302129, "complete-iteration": 0.2693807594385076, "set_robot_commands": 0.002514361739654922, "deviation-center-line": 2.8534682288672433, "driven_lanedir_consec": 9.365800927087497, "sim_compute_sim_state": 0.013724130952884316, "sim_compute_performance-ego0": 0.0021778147583897166}, "LF-norm-techtrack-000-ego0": {"driven_any": 9.838796718282577, "get_ui_image": 0.03197622140380961, "step_physics": 0.1256895277720506, "survival_time": 59.99999999999873, "driven_lanedir": 9.362005311529083, "get_state_dump": 0.0045755385955505625, "get_robot_state": 0.0036945565356303017, "sim_render-ego0": 0.003812917563242281, "get_duckie_state": 1.1785937586394477e-06, "in-drivable-lane": 0.9499999999999966, "deviation-heading": 13.168424869550703, "agent_compute-ego0": 0.01215816258788605, "complete-iteration": 0.19877095405108525, "set_robot_commands": 0.002266969013769958, "deviation-center-line": 2.580572758472094, "driven_lanedir_consec": 8.331415638223874, "sim_compute_sim_state": 0.012461982102914215, "sim_compute_performance-ego0": 0.002050232629196332}, "LF-norm-small_loop-000-ego0": {"driven_any": 10.093765559409016, "get_ui_image": 0.029107329450380196, "step_physics": 0.11843179405777778, "survival_time": 59.99999999999873, "driven_lanedir": 9.66245337568414, "get_state_dump": 0.004841591694472136, "get_robot_state": 0.0038965596048957, "sim_render-ego0": 0.004049525272836296, "get_duckie_state": 1.3534869877722342e-06, "in-drivable-lane": 0.39999999999997726, "deviation-heading": 11.789643055550252, "agent_compute-ego0": 0.04846969849858851, "complete-iteration": 0.22014909838756652, "set_robot_commands": 0.002499165284842079, "deviation-center-line": 2.7875826003890403, "driven_lanedir_consec": 7.140189121361715, "sim_compute_sim_state": 0.006582555524713292, "sim_compute_performance-ego0": 0.0021742345093688203}}
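The min/mean/median/max rows in this report are plain aggregates over the four episodes in the per-episodes details. A minimal sketch with Python's `statistics` module, using the deviation-center-line values copied from the per-episode JSON (the list literal below is just those four numbers, nothing else is assumed):

```python
import statistics

# deviation-center-line per episode, copied from the per-episodes details
deviation_center_line = [
    2.233985724314331,   # LF-norm-loop-000-ego0
    2.8534682288672433,  # LF-norm-zigzag-000-ego0
    2.580572758472094,   # LF-norm-techtrack-000-ego0
    2.7875826003890403,  # LF-norm-small_loop-000-ego0
]

# These reproduce the deviation-center-line_* summary rows above.
print("min:   ", min(deviation_center_line))
print("median:", statistics.median(deviation_center_line))
print("mean:  ", statistics.mean(deviation_center_line))
print("max:   ", max(deviation_center_line))
```

With an even number of episodes the median is the average of the two middle values, which is why deviation-center-line_median (2.684…) does not match any single episode.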
set_robot_commands_max: 0.002514361739654922
set_robot_commands_mean: 0.002343571851096681
set_robot_commands_median: 0.0023830671493060185
set_robot_commands_min: 0.002093791366119766
sim_compute_performance-ego0_max: 0.0021778147583897166
sim_compute_performance-ego0_mean: 0.0020636678933104707
sim_compute_performance-ego0_median: 0.002112233569282576
sim_compute_performance-ego0_min: 0.0018523896762870137
sim_compute_sim_state_max: 0.013724130952884316
sim_compute_sim_state_mean: 0.010362041830321732
sim_compute_sim_state_median: 0.01057074042184466
sim_compute_sim_state_min: 0.006582555524713292
sim_render-ego0_max: 0.004049525272836296
sim_render-ego0_mean: 0.003861200948440463
sim_render-ego0_median: 0.0039099284155382705
sim_render-ego0_min: 0.003575421689849015
simulation-passed: 1
step_physics_max: 0.1512320817856864
step_physics_mean: 0.1254685864063425
step_physics_median: 0.1220606609149142
step_physics_min: 0.10652094200985517
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
Job 58357: step LFv-sim, status success, up to date yes, duration 0:36:40
Job 58356: step LFv-sim, status success, up to date yes, duration 0:20:16
Job 58353: step LFv-sim, status success, up to date yes, duration 0:28:07
Job 58351: step LFv-sim, status success, up to date yes, duration 0:24:16
Job 58350: step LFv-sim, status success, up to date yes, duration 0:22:32
Job 52387: step LFv-sim, status error, up to date no, duration 0:04:24
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139705483936192
- M:video_aido:cmdline(in:/;out:/) 139705483936768
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
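This job (and job 52366 below) died in the video step because PIL could not identify `banner1.png`, which typically means a truncated or zero-byte asset. A cheap pre-flight guard is to verify the PNG signature before the file reaches the pipeline. This is a stdlib-only sketch; the helper name and the filename check are illustrative, not part of the procgraph or evaluator API:

```python
# PIL raises UnidentifiedImageError when a file does not start with a
# known image signature (e.g. a truncated or zero-byte banner1.png).
# `looks_like_png` is a hypothetical helper, not an existing API.

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # first 8 bytes of every valid PNG

def looks_like_png(path: str) -> bool:
    """Return True iff the file exists and starts with the PNG signature."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except OSError:
        return False

# Usage: validate assets before handing them to the video pipeline.
if not looks_like_png("banner1.png"):
    print("banner1.png is missing or corrupt; skip or replace it")
```

This only proves the header is plausible; a fully corrupt body would still fail later, but it catches the empty-file case seen here without importing PIL at all.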
Job 52366: step LFv-sim, status error, up to date no, duration 0:10:32
InvalidEvaluator: cannot identify image file 'banner1.png' (same traceback as job 52387 above).
Job 52360: step LFv-sim, status timeout, up to date no, duration 0:11:36
Timeout because evaluator contacted us
Job 41746: step LFv-sim, status success, up to date no, duration 0:09:19
Job 38223: step LFv-sim, status success, up to date no, duration 0:10:42
Job 38222: step LFv-sim, status success, up to date no, duration 0:10:46
Job 36346: step LFv-sim, status success, up to date no, duration 0:10:03
Job 36345: step LFv-sim, status error, up to date no, duration 0:00:51
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something is very wrong.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-Sandy2-sandy-1-job36345-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 35773: step LFv-sim, status success, up to date no, duration 0:01:09
Job 35378: step LFv-sim, status error, up to date no, duration 0:24:55
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg01-94a6fab21ac9-1-job35378:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg01-94a6fab21ac9-1-job35378/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg01-94a6fab21ac9-1-job35378/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg01-94a6fab21ac9-1-job35378/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg01-94a6fab21ac9-1-job35378/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg01-94a6fab21ac9-1-job35378/logs/challenges-runner/stderr.log
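Per the NoResultsFound error above, a job only counts as finished when `challenge-results/challenge_results.yaml` exists under the job's working directory. The check can be reproduced in a few lines; `results_present` is our own helper name and the boolean interface is a simplification (the real `read_challenge_results` raises `NoResultsFound` instead):

```python
from pathlib import Path

def results_present(workdir: str) -> bool:
    """True if the evaluator wrote challenge_results.yaml into workdir.

    Sketch of the check behind NoResultsFound; the helper name and the
    bool return are our own simplification, not the library API.
    """
    results = Path(workdir) / "challenge-results" / "challenge_results.yaml"
    return results.is_file()
```

Running this against a job directory before uploading artefacts gives a quick local signal that the evaluator exited without producing results, as happened for jobs 35378 and 35377.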
Job 35377: step LFv-sim, status error, up to date no, duration 0:22:42
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg04-c054faef3177-1-job35377:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg04-c054faef3177-1-job35377/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg04-c054faef3177-1-job35377/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg04-c054faef3177-1-job35377/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg04-c054faef3177-1-job35377/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9301/LFv-sim-reg04-c054faef3177-1-job35377/logs/challenges-runner/stderr.log
Job 35010: step LFv-sim, status success, up to date no, duration 0:24:10
Job 34640: step LFv-sim, status success, up to date no, duration 0:22:27
Job 34639: step LFv-sim, status success, up to date no, duration 0:23:38