
Submission 9235

Submission: 9235
Competing: yes
Challenge: aido5-LF-sim-validation
User: Melisande Teng
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58499
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58499

Episodes (on the original page, each episode image links to detailed per-episode statistics):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
(The date-started, date-completed, and message columns were empty or truncated in the capture; each job's full message and artefact notes follow its row.)
58499 | LFv-sim | success | yes | 0:06:38
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.6778635702532005
survival_time_median: 10.750000000000018
deviation-center-line_median: 0.17664931249331806
in-drivable-lane_median: 3.7500000000000138


Other stats:

agent_compute-ego0_max: 0.014259788225281914
agent_compute-ego0_mean: 0.013257992448530737
agent_compute-ego0_median: 0.013024951536328658
agent_compute-ego0_min: 0.012722278496183724
complete-iteration_max: 0.24160154360645225
complete-iteration_mean: 0.18561879836909892
complete-iteration_median: 0.1745547490350994
complete-iteration_min: 0.15176415179974467
deviation-center-line_max: 0.6250045350645256
deviation-center-line_mean: 0.2652107596996283
deviation-center-line_min: 0.08253987874735146
deviation-heading_max: 1.4849586391972889
deviation-heading_mean: 0.868306741341664
deviation-heading_median: 0.7155598883828138
deviation-heading_min: 0.5571485494037395
driven_any_max: 3.2410462322228346
driven_any_mean: 1.9098374470111317
driven_any_median: 2.035628768382392
driven_any_min: 0.32704601905690867
driven_lanedir_consec_max: 2.0670280178575116
driven_lanedir_consec_mean: 0.8983705371988555
driven_lanedir_consec_min: 0.17072699043150896
driven_lanedir_max: 2.0670280178575116
driven_lanedir_mean: 0.8983705371988555
driven_lanedir_median: 0.6778635702532005
driven_lanedir_min: 0.17072699043150896
get_duckie_state_max: 2.55136375883186e-06
get_duckie_state_mean: 2.4734444767231128e-06
get_duckie_state_median: 2.516815911663544e-06
get_duckie_state_min: 2.3087823247335043e-06
get_robot_state_max: 0.003760739626637493
get_robot_state_mean: 0.0037269954436646856
get_robot_state_median: 0.0037286031550100352
get_robot_state_min: 0.0036900358380011792
get_state_dump_max: 0.004838832346091708
get_state_dump_mean: 0.004777126164211369
get_state_dump_median: 0.004779316824650404
get_state_dump_min: 0.004711038661452959
get_ui_image_max: 0.034660456315526424
get_ui_image_mean: 0.02971121726922053
get_ui_image_median: 0.029154876629984547
get_ui_image_min: 0.02587465950138661
in-drivable-lane_max: 12.750000000000108
in-drivable-lane_mean: 5.375000000000034
in-drivable-lane_min: 1.2499999999999982
per-episodes details (JSON, reflowed one episode per entry):

{
"LF-norm-loop-000-ego0": {"driven_any": 2.402504769291981, "get_ui_image": 0.0273239907040539, "step_physics": 0.1009700013346881, "survival_time": 12.500000000000044, "driven_lanedir": 2.0670280178575116, "get_state_dump": 0.004838832346091708, "get_robot_state": 0.003760739626637493, "sim_render-ego0": 0.0038677050297953697, "get_duckie_state": 2.55136375883186e-06, "in-drivable-lane": 2.10000000000003, "deviation-heading": 1.4849586391972889, "agent_compute-ego0": 0.012722278496183724, "complete-iteration": 0.16749817916596554, "set_robot_commands": 0.0022798068969848147, "deviation-center-line": 0.6250045350645256, "driven_lanedir_consec": 2.0670280178575116, "sim_compute_sim_state": 0.00957005812352397, "sim_compute_performance-ego0": 0.0020639402457917353},
"LF-norm-zigzag-000-ego0": {"driven_any": 0.32704601905690867, "get_ui_image": 0.034660456315526424, "step_physics": 0.16640197555973846, "survival_time": 2.5999999999999988, "driven_lanedir": 0.17072699043150896, "get_state_dump": 0.004711038661452959, "get_robot_state": 0.0036900358380011792, "sim_render-ego0": 0.003957258080536465, "get_duckie_state": 2.496647384931456e-06, "in-drivable-lane": 1.2499999999999982, "deviation-heading": 0.7828311365573363, "agent_compute-ego0": 0.014259788225281914, "complete-iteration": 0.24160154360645225, "set_robot_commands": 0.0022669558255177624, "deviation-center-line": 0.08253987874735146, "driven_lanedir_consec": 0.17072699043150896, "sim_compute_sim_state": 0.009481110662784216, "sim_compute_performance-ego0": 0.002075213306355026},
"LF-norm-techtrack-000-ego0": {"driven_any": 3.2410462322228346, "get_ui_image": 0.03098576255591519, "step_physics": 0.11030565471534268, "survival_time": 16.5500000000001, "driven_lanedir": 0.693313505152086, "get_state_dump": 0.004815730703882424, "get_robot_state": 0.0037514215492340454, "sim_render-ego0": 0.003926009298807167, "get_duckie_state": 2.3087823247335043e-06, "in-drivable-lane": 12.750000000000108, "deviation-heading": 0.6482886402082912, "agent_compute-ego0": 0.013302230691335288, "complete-iteration": 0.1816113189042333, "set_robot_commands": 0.0023110271936439605, "deviation-center-line": 0.17667942416072732, "driven_lanedir_consec": 0.693313505152086, "sim_compute_sim_state": 0.010069385350468647, "sim_compute_performance-ego0": 0.0020448809646698364},
"LF-norm-small_loop-000-ego0": {"driven_any": 1.6687527674728024, "get_ui_image": 0.02587465950138661, "step_physics": 0.0913402336078454, "survival_time": 8.999999999999993, "driven_lanedir": 0.6624136353543151, "get_state_dump": 0.004742902945418384, "get_robot_state": 0.003705784760786025, "sim_render-ego0": 0.003746841493891089, "get_duckie_state": 2.536984438395632e-06, "in-drivable-lane": 5.399999999999998, "deviation-heading": 0.5571485494037395, "agent_compute-ego0": 0.012747672381322028, "complete-iteration": 0.15176415179974467, "set_robot_commands": 0.002216210022815683, "deviation-center-line": 0.17661920082590882, "driven_lanedir_consec": 0.6624136353543151, "sim_compute_sim_state": 0.005224245029259782, "sim_compute_performance-ego0": 0.0020690201395782977}
}
set_robot_commands_max: 0.0023110271936439605
set_robot_commands_mean: 0.0022684999847405554
set_robot_commands_median: 0.0022733813612512885
set_robot_commands_min: 0.002216210022815683
sim_compute_performance-ego0_max: 0.002075213306355026
sim_compute_performance-ego0_mean: 0.002063263664098724
sim_compute_performance-ego0_median: 0.0020664801926850163
sim_compute_performance-ego0_min: 0.0020448809646698364
sim_compute_sim_state_max: 0.010069385350468647
sim_compute_sim_state_mean: 0.008586199791509154
sim_compute_sim_state_median: 0.009525584393154093
sim_compute_sim_state_min: 0.005224245029259782
sim_render-ego0_max: 0.003957258080536465
sim_render-ego0_mean: 0.0038744534757575223
sim_render-ego0_median: 0.003896857164301268
sim_render-ego0_min: 0.003746841493891089
simulation-passed: 1
step_physics_max: 0.16640197555973846
step_physics_mean: 0.11725446630440368
step_physics_median: 0.1056378280250154
step_physics_min: 0.0913402336078454
survival_time_max: 16.5500000000001
survival_time_mean: 10.162500000000032
survival_time_min: 2.5999999999999988
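Each aggregate row above is a simple per-metric statistic over the four episodes. As a sanity check, a minimal sketch using Python's statistics module (the values are transcribed from the per-episodes details; this is an illustration, not the evaluator's actual aggregation code):

```python
import statistics

# survival_time per episode, transcribed from the per-episodes details above
survival_times = {
    "LF-norm-loop-000-ego0": 12.500000000000044,
    "LF-norm-zigzag-000-ego0": 2.5999999999999988,
    "LF-norm-techtrack-000-ego0": 16.5500000000001,
    "LF-norm-small_loop-000-ego0": 8.999999999999993,
}

values = sorted(survival_times.values())
agg = {
    "survival_time_min": values[0],
    "survival_time_max": values[-1],
    "survival_time_mean": statistics.mean(values),
    # with four episodes, the median is the mean of the two middle values
    "survival_time_median": statistics.median(values),
}
```

This reproduces the survival_time rows above (up to floating-point rounding); in particular the median 10.75 is the mean of the two middle episodes, 9.0 and 12.5.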
52434 | LFv-sim | error | no | 0:03:48
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139876575622432
- M:video_aido:cmdline(in:/;out:/) 139876575392096
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
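The stacked tracebacks above are Python's exception chaining: PIL's UnidentifiedImageError is wrapped in a ValueError via `raise ... from e`, and the experiment manager wraps that again as InvalidEvaluator. A self-contained sketch of the pattern (the class and function names mirror the log but the bodies are illustrative, with OSError standing in for the PIL error):

```python
class InvalidEvaluator(Exception):
    """Stand-in for duckietown_challenges.exceptions.InvalidEvaluator."""

def imread(filename: str) -> None:
    try:
        raise OSError(f"cannot identify image file {filename!r}")
    except OSError as e:
        # 'raise ... from e' records e as __cause__, which tracebacks render
        # as "The above exception was the direct cause of the following exception"
        raise ValueError(f'Could not open filename "{filename}".') from e

def main() -> None:
    try:
        imread("banner1.png")
    except ValueError as e:
        raise InvalidEvaluator("Anomalous error while running episodes") from e
```

Calling main() raises InvalidEvaluator whose `__cause__` chain leads back through the ValueError to the original file error, which is exactly the three-layer structure in the log.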
41784 | LFv-sim | success | no | 0:07:29
41783 | LFv-sim | success | no | 0:07:24
41782 | LFv-sim | success | no | 0:07:17
38331 | LFv-sim | success | no | 0:07:24
36421 | LFv-sim | error | no | 0:00:44
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-Sandy2-sandy-1-job36421-a-wd/challenge-results/challenge_results.yaml' does not exist.
36417 | LFv-sim | error | no | 0:00:46
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-Sandy2-sandy-1-job36417-a-wd/challenge-results/challenge_results.yaml' does not exist.
35844 | LFv-sim | success | no | 0:01:09
35426 | LFv-sim | error | no | 0:21:58
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-reg04-c054faef3177-1-job35426:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-reg04-c054faef3177-1-job35426/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-reg04-c054faef3177-1-job35426/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-reg04-c054faef3177-1-job35426/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-reg04-c054faef3177-1-job35426/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9235/LFv-sim-reg04-c054faef3177-1-job35426/logs/challenges-runner/stderr.log
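The NoResultsFound failures in jobs 36421, 36417, and 35426 all reduce to the same check: the runner looks for challenge-results/challenge_results.yaml in the job's working directory and raises if it is absent. A stdlib-only sketch of that check (the exception name and path layout come from the logs above; the function body is illustrative, not the actual duckietown_challenges code):

```python
from pathlib import Path

class NoResultsFound(Exception):
    """Stand-in for duckietown_challenges.challenge_results.NoResultsFound."""

def locate_challenge_results(wd: str) -> Path:
    # The evaluator is expected to write this file; its absence usually
    # means the evaluator crashed, or the solution container exited early.
    results = Path(wd) / "challenge-results" / "challenge_results.yaml"
    if not results.exists():
        raise NoResultsFound(f"File '{results}' does not exist.")
    return results
```

Running this against an empty working directory raises NoResultsFound with the same message shape as the log.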
35071 | LFv-sim | success | no | 0:23:21
34479 | LFv-sim | success | no | 0:25:07
34478 | LFv-sim | success | no | 0:26:15