
Submission 9241

Submission: 9241
Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58482
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58482

Episodes:

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job 58482, step LFv-sim: success (up to date), duration 0:27:30
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 6.69570370608122
survival_time_median: 59.94999999999873
deviation-center-line_median: 3.715464046170734
in-drivable-lane_median: 7.274999999999694


Other stats:

agent_compute-ego0_max: 0.035390714009602864
agent_compute-ego0_mean: 0.025403429022282462
agent_compute-ego0_median: 0.026880256780231752
agent_compute-ego0_min: 0.0124624885190635
complete-iteration_max: 0.2259420945909288
complete-iteration_mean: 0.18975817564614408
complete-iteration_median: 0.1797395768908041
complete-iteration_min: 0.1736114542120392
deviation-center-line_max: 4.468689000396321
deviation-center-line_mean: 3.0403650772193096
deviation-center-line_min: 0.26184321613945133
deviation-heading_max: 12.753585490653633
deviation-heading_mean: 8.427809418337654
deviation-heading_median: 9.698274176490091
deviation-heading_min: 1.5611038297167972
driven_any_max: 8.338136424959512
driven_any_mean: 6.578901382603088
driven_any_median: 8.278750153329813
driven_any_min: 1.4199687987932146
driven_lanedir_consec_max: 7.23497196951629
driven_lanedir_consec_mean: 5.327474139681264
driven_lanedir_consec_min: 0.6835171770463266
driven_lanedir_max: 7.23497196951629
driven_lanedir_mean: 5.327474139681264
driven_lanedir_median: 6.69570370608122
driven_lanedir_min: 0.6835171770463266
get_duckie_state_max: 1.6506360509775398e-06
get_duckie_state_mean: 1.5832412983414164e-06
get_duckie_state_median: 1.5838232895350608e-06
get_duckie_state_min: 1.514682563318003e-06
get_robot_state_max: 0.003770989839679295
get_robot_state_mean: 0.003720705797761879
get_robot_state_median: 0.003710637935754911
get_robot_state_min: 0.0036905574798583984
get_state_dump_max: 0.004743761486477322
get_state_dump_mean: 0.004712788414197307
get_state_dump_median: 0.004724486918653776
get_state_dump_min: 0.004658418333004357
get_ui_image_max: 0.03553486400180393
get_ui_image_mean: 0.030600181591048586
get_ui_image_median: 0.03038171393766077
get_ui_image_min: 0.026102434487068883
in-drivable-lane_max: 11.699999999999758
in-drivable-lane_mean: 7.9874999999997955
in-drivable-lane_min: 5.700000000000036
Per-episode details:

{"LF-norm-loop-000-ego0": {"driven_any": 8.226358143405973, "get_ui_image": 0.028125476598540776, "step_physics": 0.10019893065604496, "survival_time": 59.899999999998734, "driven_lanedir": 7.23497196951629, "get_state_dump": 0.004734442570092183, "get_robot_state": 0.0036909073168680607, "sim_render-ego0": 0.0037608357446207615, "get_duckie_state": 1.6506360509775398e-06, "in-drivable-lane": 6.699999999999619, "deviation-heading": 7.772446783057263, "agent_compute-ego0": 0.019496133269818253, "complete-iteration": 0.1736114542120392, "set_robot_commands": 0.002203090674087741, "deviation-center-line": 3.515368096394195, "driven_lanedir_consec": 7.23497196951629, "sim_compute_sim_state": 0.009344190632531404, "sim_compute_performance-ego0": 0.0019591936377110933}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.4199687987932146, "get_ui_image": 0.03553486400180393, "step_physics": 0.12838572608100043, "survival_time": 11.200000000000024, "driven_lanedir": 0.6835171770463266, "get_state_dump": 0.004743761486477322, "get_robot_state": 0.0036905574798583984, "sim_render-ego0": 0.003767349455091688, "get_duckie_state": 1.5672047932942708e-06, "in-drivable-lane": 5.700000000000036, "deviation-heading": 1.5611038297167972, "agent_compute-ego0": 0.035390714009602864, "complete-iteration": 0.2259420945909288, "set_robot_commands": 0.0022736358642578124, "deviation-center-line": 0.26184321613945133, "driven_lanedir_consec": 0.6835171770463266, "sim_compute_sim_state": 0.01006781578063965, "sim_compute_performance-ego0": 0.0019890965355767146}, "LF-norm-techtrack-000-ego0": {"driven_any": 8.331142163253654, "get_ui_image": 0.03263795127678076, "step_physics": 0.10933593012311874, "survival_time": 59.99999999999873, "driven_lanedir": 6.922243088180358, "get_state_dump": 0.004658418333004357, "get_robot_state": 0.0037303685546417617, "sim_render-ego0": 0.0038231927885997302, "get_duckie_state": 1.514682563318003e-06, "in-drivable-lane": 7.849999999999769, 
"deviation-heading": 12.753585490653633, "agent_compute-ego0": 0.0124624885190635, "complete-iteration": 0.1843046836313062, "set_robot_commands": 0.0022619923981500607, "deviation-center-line": 3.915559995947272, "driven_lanedir_consec": 6.922243088180358, "sim_compute_sim_state": 0.01323065928475843, "sim_compute_performance-ego0": 0.0020687282333564597}, "LF-norm-small_loop-000-ego0": {"driven_any": 8.338136424959512, "get_ui_image": 0.026102434487068883, "step_physics": 0.09182187599703036, "survival_time": 59.99999999999873, "driven_lanedir": 6.469164323982081, "get_state_dump": 0.00471453126721537, "get_robot_state": 0.003770989839679295, "sim_render-ego0": 0.003828051882322186, "get_duckie_state": 1.6004417857758509e-06, "in-drivable-lane": 11.699999999999758, "deviation-heading": 11.624101569922924, "agent_compute-ego0": 0.03426438029064525, "complete-iteration": 0.175174470150302, "set_robot_commands": 0.002324725864928132, "deviation-center-line": 4.468689000396321, "driven_lanedir_consec": 6.469164323982081, "sim_compute_sim_state": 0.006236946850791759, "sim_compute_performance-ego0": 0.0020123548452105748}}
set_robot_commands_max: 0.002324725864928132
set_robot_commands_mean: 0.0022658612003559364
set_robot_commands_median: 0.0022678141312039365
set_robot_commands_min: 0.002203090674087741
sim_compute_performance-ego0_max: 0.0020687282333564597
sim_compute_performance-ego0_mean: 0.0020073433129637105
sim_compute_performance-ego0_median: 0.0020007256903936447
sim_compute_performance-ego0_min: 0.0019591936377110933
sim_compute_sim_state_max: 0.01323065928475843
sim_compute_sim_state_mean: 0.009719903137180307
sim_compute_sim_state_median: 0.009706003206585526
sim_compute_sim_state_min: 0.006236946850791759
sim_render-ego0_max: 0.003828051882322186
sim_render-ego0_mean: 0.003794857467658591
sim_render-ego0_median: 0.0037952711218457095
sim_render-ego0_min: 0.0037608357446207615
simulation-passed: 1
step_physics_max: 0.12838572608100043
step_physics_mean: 0.10743561571429865
step_physics_median: 0.10476743038958183
step_physics_min: 0.09182187599703036
survival_time_max: 59.99999999999873
survival_time_mean: 47.774999999999054
survival_time_min: 11.200000000000024
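The min/mean/median/max entries above appear to be per-metric aggregates over the four episodes. A minimal sketch of that computation for `survival_time`, assuming a plain arithmetic mean and the usual midpoint median for an even count (this reproduces the aggregate values listed):

```python
import statistics

# survival_time for each episode, copied from the per-episode details above.
survival_times = [
    59.899999999998734,  # LF-norm-loop-000
    11.200000000000024,  # LF-norm-zigzag-000
    59.99999999999873,   # LF-norm-techtrack-000
    59.99999999999873,   # LF-norm-small_loop-000
]

aggregates = {
    "survival_time_min": min(survival_times),
    "survival_time_max": max(survival_times),
    "survival_time_mean": statistics.mean(survival_times),
    "survival_time_median": statistics.median(survival_times),
}
```

The median of an even-length list is the midpoint of the two central values, which is why `survival_time_median` (59.949…) is not itself one of the episode values.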
Job 58479, step LFv-sim: success (up to date), duration 0:32:27
Job 58477, step LFv-sim: success (up to date), duration 0:26:34
Job 58475, step LFv-sim: success (up to date), duration 0:32:29
Job 58474, step LFv-sim: success (up to date), duration 0:26:29
Job 58473, step LFv-sim: success (up to date), duration 0:25:32
Job 58472, step LFv-sim: success (up to date), duration 0:28:08
Job 52406, step LFv-sim: error (not up to date), duration 0:06:17

InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139965173519648
- M:video_aido:cmdline(in:/;out:/) 139965173491648
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
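The root cause in this job is Pillow failing to identify `banner1.png` before the video pipeline runs. A minimal pre-check one could apply to such an asset, using only standard Pillow API (`is_readable_image` is an illustrative helper, not part of the Duckietown code):

```python
from PIL import Image, UnidentifiedImageError

def is_readable_image(path: str) -> bool:
    """Return True if Pillow can identify and verify the file at `path`."""
    try:
        with Image.open(path) as im:
            im.verify()  # header/integrity check; does not decode all pixels
        return True
    except (UnidentifiedImageError, OSError):
        # Unrecognized format, truncated file, or missing file.
        return False
```

Running such a check on `banner1.png` before starting the evaluation would have turned this late `InvalidEvaluator` failure into an early, explicit asset error.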
Job 41776, step LFv-sim: success (not up to date), duration 0:09:01
Job 38314, step LFv-sim: success (not up to date), duration 0:10:42
Job 38312, step LFv-sim: success (not up to date), duration 0:08:52
Job 36408, step LFv-sim: error (not up to date), duration 0:00:49

The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-Sandy1-sandy-1-job36408-a-wd/challenge-results/challenge_results.yaml' does not exist.
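Both this failure and job 35420 below come down to the runner expecting `challenge-results/challenge_results.yaml` inside the job working directory. A sketch of that existence check, with the directory layout inferred from the tracebacks (`find_challenge_results` is an illustrative name, not the runner's actual function):

```python
import os

def find_challenge_results(wd: str) -> str:
    """Return the results file path, or raise if the evaluator never wrote it."""
    path = os.path.join(wd, "challenge-results", "challenge_results.yaml")
    if not os.path.exists(path):
        raise FileNotFoundError(f"File '{path}' does not exist.")
    return path
```

When the solution container exits early (as here, with code 1), the evaluator never reaches the point of writing this file, so the missing-results error is a symptom rather than the underlying fault.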
Job 35832, step LFv-sim: success (not up to date), duration 0:01:06
Job 35420, step LFv-sim: error (not up to date), duration 0:21:42

The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-reg02-1b92df2e7e91-1-job35420:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-reg02-1b92df2e7e91-1-job35420/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-reg02-1b92df2e7e91-1-job35420/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-reg02-1b92df2e7e91-1-job35420/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-reg02-1b92df2e7e91-1-job35420/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9241/LFv-sim-reg02-1b92df2e7e91-1-job35420/logs/challenges-runner/stderr.log
Job 35065, step LFv-sim: success (not up to date), duration 0:24:30
Job 34522, step LFv-sim: success (not up to date), duration 0:25:25
Job 34521, step LFv-sim: success (not up to date), duration 0:26:05