
Submission 9314

Submission: 9314
Competing: yes
Challenge: aido5-LF-sim-validation
User: Jerome Labonte 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58239
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58239

Episodes (per-episode statistics images omitted):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Columns: Job ID, step, status, up to date, date started, date completed, duration, message.

Job 58239: step LFv-sim, status: success, up to date: yes, duration: 0:30:51
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 5.597215610362803
survival_time_median: 58.54999999999881
deviation-center-line_median: 3.667435746445787
in-drivable-lane_median: 8.674999999999761


other stats

agent_compute-ego0_max: 0.013881672496985575
agent_compute-ego0_mean: 0.012758680885240944
agent_compute-ego0_median: 0.012509422337979102
agent_compute-ego0_min: 0.012134206368019993
complete-iteration_max: 0.2048968075713349
complete-iteration_mean: 0.18143096070071324
complete-iteration_median: 0.18473730514226763
complete-iteration_min: 0.15135242494698273
deviation-center-line_max: 4.3857215721510885
deviation-center-line_mean: 3.32544651899513
deviation-center-line_min: 1.5811930109378547
deviation-heading_max: 22.983679004873395
deviation-heading_mean: 14.188053285155242
deviation-heading_median: 13.624619348090008
deviation-heading_min: 6.519295439567559
driven_any_max: 12.283007711678422
driven_any_mean: 10.60802423591451
driven_any_median: 11.215808675796364
driven_any_min: 7.717471880386884
driven_lanedir_consec_max: 6.949097787916864
driven_lanedir_consec_mean: 5.490788535566745
driven_lanedir_consec_min: 3.819625133624512
driven_lanedir_max: 11.38190251750487
driven_lanedir_mean: 9.021566038400394
driven_lanedir_median: 9.106152697023152
driven_lanedir_min: 6.492056242050406
get_duckie_state_max: 1.748289865246877e-06
get_duckie_state_mean: 1.39156186492325e-06
get_duckie_state_median: 1.2956193642850525e-06
get_duckie_state_min: 1.2267188658760184e-06
get_robot_state_max: 0.0039960608882551344
get_robot_state_mean: 0.003743812856305925
get_robot_state_median: 0.0036806557404756007
get_robot_state_min: 0.0036178790560173657
get_state_dump_max: 0.005175560331276775
get_state_dump_mean: 0.004743915327006082
get_state_dump_median: 0.004642059463545444
get_state_dump_min: 0.004515982049656665
get_ui_image_max: 0.034839424066599164
get_ui_image_mean: 0.03065442812094983
get_ui_image_median: 0.031230782833737536
get_ui_image_min: 0.0253167227497251
in-drivable-lane_max: 10.749999999999698
in-drivable-lane_mean: 7.974999999999763
in-drivable-lane_min: 3.7999999999998337
per-episodes details:

LF-norm-loop-000-ego0: {"driven_any": 12.283007711678422, "get_ui_image": 0.027902038369349496, "step_physics": 0.1027148649357042, "survival_time": 59.99999999999873, "driven_lanedir": 11.38190251750487, "get_state_dump": 0.0045841224584650935, "get_robot_state": 0.003718989179295167, "sim_render-ego0": 0.003758571824860712, "get_duckie_state": 1.2635589142226857e-06, "in-drivable-lane": 3.7999999999998337, "deviation-heading": 15.55118747428988, "agent_compute-ego0": 0.012325172321087713, "complete-iteration": 0.16897876455226013, "set_robot_commands": 0.002262098406077821, "deviation-center-line": 3.8551709758486927, "driven_lanedir_consec": 6.949097787916864, "sim_compute_sim_state": 0.00961673031440881, "sim_compute_performance-ego0": 0.0020072414515715257}

LF-norm-zigzag-000-ego0: {"driven_any": 10.211928171763825, "get_ui_image": 0.034839424066599164, "step_physics": 0.12914908398795782, "survival_time": 59.99999999999873, "driven_lanedir": 7.795405918859824, "get_state_dump": 0.004699996468625795, "get_robot_state": 0.0036178790560173657, "sim_render-ego0": 0.003746595906774567, "get_duckie_state": 1.3276798143474189e-06, "in-drivable-lane": 10.749999999999698, "deviation-heading": 22.983679004873395, "agent_compute-ego0": 0.012693672354870494, "complete-iteration": 0.2048968075713349, "set_robot_commands": 0.002207555937628067, "deviation-center-line": 4.3857215721510885, "driven_lanedir_consec": 3.819625133624512, "sim_compute_sim_state": 0.011865576935449706, "sim_compute_performance-ego0": 0.0019863547135352292}

LF-norm-techtrack-000-ego0: {"driven_any": 7.717471880386884, "get_ui_image": 0.03455952729812557, "step_physics": 0.12171651221610404, "survival_time": 35.10000000000014, "driven_lanedir": 6.492056242050406, "get_state_dump": 0.005175560331276775, "get_robot_state": 0.0039960608882551344, "sim_render-ego0": 0.0040780439824501784, "get_duckie_state": 1.748289865246877e-06, "in-drivable-lane": 7.649999999999868, "deviation-heading": 6.519295439567559, "agent_compute-ego0": 0.013881672496985575, "complete-iteration": 0.2004958457322751, "set_robot_commands": 0.0025027449405719, "deviation-center-line": 1.5811930109378547, "driven_lanedir_consec": 5.4629385175293095, "sim_compute_sim_state": 0.012213131781153456, "sim_compute_performance-ego0": 0.00225923580261925}

LF-norm-small_loop-000-ego0: {"driven_any": 12.219689179828904, "get_ui_image": 0.0253167227497251, "step_physics": 0.0916888125612354, "survival_time": 57.09999999999889, "driven_lanedir": 10.41689947518648, "get_state_dump": 0.004515982049656665, "get_robot_state": 0.003642322301656034, "sim_render-ego0": 0.0037637438986870874, "get_duckie_state": 1.2267188658760184e-06, "in-drivable-lane": 9.699999999999656, "deviation-heading": 11.698051221890134, "agent_compute-ego0": 0.012134206368019993, "complete-iteration": 0.15135242494698273, "set_robot_commands": 0.0022411100299131004, "deviation-center-line": 3.4797005170428825, "driven_lanedir_consec": 5.731492703196297, "sim_compute_sim_state": 0.005962932725173804, "sim_compute_performance-ego0": 0.0019997549182160946}
set_robot_commands_max: 0.0025027449405719
set_robot_commands_mean: 0.002303377328547722
set_robot_commands_median: 0.0022516042179954606
set_robot_commands_min: 0.002207555937628067
sim_compute_performance-ego0_max: 0.00225923580261925
sim_compute_performance-ego0_mean: 0.002063146721485525
sim_compute_performance-ego0_median: 0.00200349818489381
sim_compute_performance-ego0_min: 0.0019863547135352292
sim_compute_sim_state_max: 0.012213131781153456
sim_compute_sim_state_mean: 0.009914592939046445
sim_compute_sim_state_median: 0.010741153624929256
sim_compute_sim_state_min: 0.005962932725173804
sim_render-ego0_max: 0.0040780439824501784
sim_render-ego0_mean: 0.003836738903193136
sim_render-ego0_median: 0.0037611578617739
sim_render-ego0_min: 0.003746595906774567
simulation-passed: 1
step_physics_max: 0.12914908398795782
step_physics_mean: 0.11131731842525038
step_physics_median: 0.11221568857590414
step_physics_min: 0.0916888125612354
survival_time_max: 59.99999999999873
survival_time_mean: 53.04999999999912
survival_time_min: 35.10000000000014
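The aggregate statistics above appear to be plain reductions over the four episodes. As a sketch (assuming the aggregates are simple per-metric reductions, and copying the per-episode values from the details above), the reported medians can be reproduced with the standard library:

```python
from statistics import median

# Per-episode values copied from the per-episodes details above
# (loop, zigzag, techtrack, small_loop).
survival_time = [59.99999999999873, 59.99999999999873, 35.10000000000014, 57.09999999999889]
driven_lanedir_consec = [6.949097787916864, 3.819625133624512, 5.4629385175293095, 5.731492703196297]

# With four episodes, the median is the mean of the two middle values.
print(median(survival_time))          # survival_time_median
print(median(driven_lanedir_consec))  # driven_lanedir_consec_median
```

This matches the reported survival_time_median of 58.549… and driven_lanedir_consec_median of 5.5972… up to floating-point rounding.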
Job 58217: step LFv-sim, status: success, up to date: yes, duration: 0:28:41
Job 52320: step LFv-sim, status: error, up to date: no, duration: 0:08:58
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140471557027200
- M:video_aido:cmdline(in:/;out:/) 140471557029120
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
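The root cause of job 52320 is Pillow failing to decode banner1.png while the video renderer builds its static banner image. Pillow raises UnidentifiedImageError when a file exists but does not begin with any recognizable image header, for example a zero-byte or truncated download. A minimal reproduction (the temporary file here stands in for the broken banner):

```python
import os
import tempfile
from PIL import Image, UnidentifiedImageError

# Create an existing but empty .png: Image.open() can read the file,
# but cannot identify any image format inside it.
fd, path = tempfile.mkstemp(suffix=".png")
os.close(fd)
try:
    Image.open(path)
except UnidentifiedImageError as e:
    print("Pillow rejected the file:", e)
finally:
    os.remove(path)
```

So the fix is on the evaluator side: replace or re-download the banner asset rather than changing the submission.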
Job 41732: step LFv-sim, status: success, up to date: no, duration: 0:09:06
Job 38183: step LFv-sim, status: error, up to date: no, duration: 0:00:42
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9314/LFv-sim-mont01-6ef51bb8a9d6-1-job38183-a-wd/challenge-results/challenge_results.yaml' does not exist.
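The NoResultsFound failures below all share one pattern: a container exited before the evaluator wrote challenge-results/challenge_results.yaml into the job's working directory, so the runner's read step finds nothing. A sketch of that check (illustrative only; the function name mirrors the traceback, but this is not the actual duckietown_challenges implementation):

```python
import os


class NoResultsFound(Exception):
    """Raised when a finished job has no results file in its working dir."""


def read_challenge_results(wd: str) -> str:
    """Return the path to the results file, or raise if it was never written."""
    results = os.path.join(wd, "challenge-results", "challenge_results.yaml")
    if not os.path.exists(results):
        raise NoResultsFound(f"File {results!r} does not exist.")
    return results
```

Under this reading, the error is a symptom: the real cause is whatever made the "evaluator" or "solution" container exit with code 1, which only the container logs can show.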
Job 38179: step LFv-sim, status: error, up to date: no, duration: 0:00:36
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9314/LFv-sim-mont04-e828c68b6a88-1-job38179-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 36326: step LFv-sim, status: error, up to date: no, duration: 0:00:43
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-Sandy2-sandy-1-job36326-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 36323: step LFv-sim, status: error, up to date: no, duration: 0:00:43
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-Sandy2-sandy-1-job36323-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 35757: step LFv-sim, status: error, up to date: no, duration: 0:00:48
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1063, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-noname-sandy-1-job35757-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job 35365: step LFv-sim, status: error, up to date: no, duration: 0:21:02
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-reg04-c054faef3177-1-job35365:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-reg04-c054faef3177-1-job35365/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-reg04-c054faef3177-1-job35365/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-reg04-c054faef3177-1-job35365/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-reg04-c054faef3177-1-job35365/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9314/LFv-sim-reg04-c054faef3177-1-job35365/logs/challenges-runner/stderr.log
Job 34998: step LFv-sim, status: success, up to date: no, duration: 0:23:10
Job 34664: step LFv-sim, status: success, up to date: no, duration: 0:23:22
Job 34663: step LFv-sim, status: success, up to date: no, duration: 0:25:56
Job 34662: step LFv-sim, status: success, up to date: no, duration: 0:24:49