Duckietown Challenges

Submission 9354

Submission: 9354
Competing: yes
Challenge: aido5-LF-sim-validation
User: Himanshu Arora 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58161
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58161

Episodes evaluated in this job:

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58161 | LFv-sim | success | yes | - | - | 0:20:22 | -
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.4803787475750393
survival_time_median: 21.900000000000176
deviation-center-line_median: 0.6329685520423263
in-drivable-lane_median: 13.375000000000156
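The headline figures are medians over the four episodes of this job; the episode-level values appear in the per-episodes details further down. As a sanity check, the reported deviation-center-line median is recoverable directly from those values. A minimal sketch:

```python
from statistics import median

# Per-episode deviation-center-line values for job 58161,
# taken from the per-episodes details of this submission.
per_episode = {
    "LF-norm-loop-000-ego0": 0.6959443550502984,
    "LF-norm-zigzag-000-ego0": 0.6742705330320855,
    "LF-norm-techtrack-000-ego0": 0.3260731993020567,
    "LF-norm-small_loop-000-ego0": 0.5916665710525671,
}

# With four episodes, median() averages the two middle values.
print(median(per_episode.values()))  # 0.6329685520423263, as reported above
```

The same aggregation reproduces the survival_time median: (19.2 + 24.6) / 2 = 21.9.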


other stats
agent_compute-ego0_max: 0.013480470814836134
agent_compute-ego0_mean: 0.012819731923010674
agent_compute-ego0_median: 0.012680920460456244
agent_compute-ego0_min: 0.012436615956294072
complete-iteration_max: 0.3350163191465792
complete-iteration_mean: 0.29532191258131535
complete-iteration_median: 0.3026957896655035
complete-iteration_min: 0.24087975184767532
deviation-center-line_max: 0.6959443550502984
deviation-center-line_mean: 0.5719886646092519
deviation-center-line_min: 0.3260731993020567
deviation-heading_max: 5.878141564086586
deviation-heading_mean: 3.6695622428442936
deviation-heading_median: 3.586681669102713
deviation-heading_min: 1.626744069085163
driven_any_max: 3.0230607461928347
driven_any_mean: 1.750349376797719
driven_any_median: 1.4577218484880516
driven_any_min: 1.0628930640219365
driven_lanedir_consec_max: 0.6406760173617714
driven_lanedir_consec_mean: 0.46708330866502346
driven_lanedir_consec_min: 0.26689972214824387
driven_lanedir_max: 0.6406760173617714
driven_lanedir_mean: 0.46708330866502346
driven_lanedir_median: 0.4803787475750393
driven_lanedir_min: 0.26689972214824387
get_duckie_state_max: 1.5775282049275798e-06
get_duckie_state_mean: 1.4914259550962094e-06
get_duckie_state_median: 1.4800416348049526e-06
get_duckie_state_min: 1.4280923458473525e-06
get_robot_state_max: 0.0038804898568249625
get_robot_state_mean: 0.0038060697410236383
get_robot_state_median: 0.003815286658241919
get_robot_state_min: 0.003713215790785752
get_state_dump_max: 0.005124766770183301
get_state_dump_mean: 0.005055939168654657
get_state_dump_median: 0.005052477723459483
get_state_dump_min: 0.00499403445751636
get_ui_image_max: 0.0351069890059434
get_ui_image_mean: 0.031195591966487496
get_ui_image_median: 0.03134911938691533
get_ui_image_min: 0.026977140086175945
in-drivable-lane_max: 37.7999999999997
in-drivable-lane_mean: 17.700000000000024
in-drivable-lane_min: 6.250000000000089
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 3.0230607461928347, "get_ui_image": 0.028350352434590543, "step_physics": 0.20616023059358995, "survival_time": 44.29999999999962, "driven_lanedir": 0.26689972214824387, "get_state_dump": 0.005124766770183301, "get_robot_state": 0.0038226059351322214, "sim_render-ego0": 0.003920365078742233, "get_duckie_state": 1.4280923458473525e-06, "in-drivable-lane": 37.7999999999997, "deviation-heading": 5.227393545584871, "agent_compute-ego0": 0.01277347051841968, "complete-iteration": 0.27486400620375573, "set_robot_commands": 0.002322761423827024, "deviation-center-line": 0.6959443550502984, "driven_lanedir_consec": 0.26689972214824387, "sim_compute_sim_state": 0.010153836032046002, "sim_compute_performance-ego0": 0.0021368962667437257}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.269042606354128, "get_ui_image": 0.0351069890059434, "step_physics": 0.2554829993805328, "survival_time": 19.200000000000134, "driven_lanedir": 0.3522920386226194, "get_state_dump": 0.00499403445751636, "get_robot_state": 0.003713215790785752, "sim_render-ego0": 0.003913043381331803, "get_duckie_state": 1.4639520025872565e-06, "in-drivable-lane": 11.450000000000095, "deviation-heading": 5.878141564086586, "agent_compute-ego0": 0.012436615956294072, "complete-iteration": 0.3305275731272512, "set_robot_commands": 0.0022197698617910408, "deviation-center-line": 0.6742705330320855, "driven_lanedir_consec": 0.3522920386226194, "sim_compute_sim_state": 0.010469037216979189, "sim_compute_performance-ego0": 0.002092335440895774}, "LF-norm-techtrack-000-ego0": {"driven_any": 1.0628930640219365, "get_ui_image": 0.034347886339240116, "step_physics": 0.26008673326684795, "survival_time": 16.300000000000097, "driven_lanedir": 0.6406760173617714, "get_state_dump": 0.005088542944065293, "get_robot_state": 0.0038804898568249625, "sim_render-ego0": 0.004115116341033842, "get_duckie_state": 1.496131267022649e-06, "in-drivable-lane": 6.250000000000089, "deviation-heading": 1.9459697926205544, "agent_compute-ego0": 0.013480470814836134, "complete-iteration": 0.3350163191465792, "set_robot_commands": 0.002432061262451545, "deviation-center-line": 0.3260731993020567, "driven_lanedir_consec": 0.6406760173617714, "sim_compute_sim_state": 0.00929664180183994, "sim_compute_performance-ego0": 0.00218605411891179}, "LF-norm-small_loop-000-ego0": {"driven_any": 1.6464010906219757, "get_ui_image": 0.026977140086175945, "step_physics": 0.1787108209505526, "survival_time": 24.600000000000215, "driven_lanedir": 0.6084654565274592, "get_state_dump": 0.005016412502853672, "get_robot_state": 0.003807967381351618, "sim_render-ego0": 0.003919342710328634, "get_duckie_state": 1.5775282049275798e-06, "in-drivable-lane": 15.300000000000216, "deviation-heading": 1.626744069085163, "agent_compute-ego0": 0.01258837040249281, "complete-iteration": 0.24087975184767532, "set_robot_commands": 0.0023208902525369826, "deviation-center-line": 0.5916665710525671, "driven_lanedir_consec": 0.6084654565274592, "sim_compute_sim_state": 0.005341262411152374, "sim_compute_performance-ego0": 0.0020947543409243073}}
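The details blob above is plain JSON: one object per episode, each mapping metric names to floats. A short sketch of pulling a single metric out of it (the snippet inlines a trimmed two-episode subset of the data above):

```python
import json

# Trimmed subset of the per-episodes details above (two of the four episodes).
raw = """
{"LF-norm-loop-000-ego0": {"survival_time": 44.29999999999962, "driven_any": 3.0230607461928347},
 "LF-norm-zigzag-000-ego0": {"survival_time": 19.200000000000134, "driven_any": 1.269042606354128}}
"""

episodes = json.loads(raw)
for name, stats in episodes.items():
    print(f"{name}: survived {stats['survival_time']:.1f}s")
# LF-norm-loop-000-ego0: survived 44.3s
# LF-norm-zigzag-000-ego0: survived 19.2s
```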
set_robot_commands_max: 0.002432061262451545
set_robot_commands_mean: 0.002323870700151648
set_robot_commands_median: 0.002321825838182003
set_robot_commands_min: 0.0022197698617910408
sim_compute_performance-ego0_max: 0.00218605411891179
sim_compute_performance-ego0_mean: 0.002127510041868899
sim_compute_performance-ego0_median: 0.0021158253038340165
sim_compute_performance-ego0_min: 0.002092335440895774
sim_compute_sim_state_max: 0.010469037216979189
sim_compute_sim_state_mean: 0.008815194365504375
sim_compute_sim_state_median: 0.00972523891694297
sim_compute_sim_state_min: 0.005341262411152374
sim_render-ego0_max: 0.004115116341033842
sim_render-ego0_mean: 0.003966966877859128
sim_render-ego0_median: 0.003919853894535433
sim_render-ego0_min: 0.003913043381331803
simulation-passed: 1
step_physics_max: 0.26008673326684795
step_physics_mean: 0.2251101960478808
step_physics_median: 0.2308216149870614
step_physics_min: 0.1787108209505526
survival_time_max: 44.29999999999962
survival_time_mean: 26.100000000000016
survival_time_min: 16.300000000000097
58148 | LFv-sim | timeout | yes | - | - | - | -
58147 | LFv-sim | timeout | yes | - | - | - | -
58145 | LFv-sim | timeout | yes | - | - | - | -
52332 | LFv-sim | error | no | - | - | 0:07:35 | -
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140695509299264
- M:video_aido:cmdline(in:/;out:/) 140695512103520
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
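The repeated "The above exception was the direct cause of the following exception" separators are Python's exception chaining at work: the traceback shows imread catching the PIL error and re-raising with `raise ValueError(msg) from e`, and the experiment manager chaining once more into InvalidEvaluator. A minimal sketch of the mechanism (the function body is a stand-in, not the actual procgraph code):

```python
def imread(filename):
    """Stand-in loader: simulate PIL failing to parse the file."""
    try:
        # In the real traceback this is PIL.UnidentifiedImageError.
        raise OSError(f"cannot identify image file {filename!r}")
    except OSError as e:
        # 'from e' records the original error as __cause__, which is what
        # prints the "direct cause" separator between the two tracebacks.
        raise ValueError(f'Could not open filename "{filename}".') from e

try:
    imread("banner1.png")
except ValueError as err:
    assert isinstance(err.__cause__, OSError)
    print(err)            # Could not open filename "banner1.png".
    print(err.__cause__)  # cannot identify image file 'banner1.png'
```

The root cause in this job, then, is simply that 'banner1.png' on the evaluator was not a readable image file; everything above it in the chain is propagation.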
52330 | LFv-sim | timeout | no | - | - | 0:08:49 | -
Timeout because evaluator contacted us
52315 | LFv-sim | error | no | - | - | 0:06:10 | -
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139716754611696
- M:video_aido:cmdline(in:/;out:/) 139716754721280
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
52208 | LFv-sim | timeout | no | - | - | 0:09:28 | -
Timeout because evaluator contacted us
41693 | LFv-sim | success | no | - | - | 0:08:46 | -
38145 | LFv-sim | error | no | - | - | 0:00:43 | -
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9354/LFv-sim-mont04-e828c68b6a88-1-job38145-a-wd/challenge-results/challenge_results.yaml' does not exist.
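NoResultsFound means the evaluator exited before writing challenge-results/challenge_results.yaml into the job's working directory, so the runner had nothing to score. The check amounts to something like the following sketch (names mirror the log above; this is not the actual duckietown_challenges implementation):

```python
import os

class NoResultsFound(Exception):
    """Raised when a job's working dir has no challenge_results.yaml."""

def read_challenge_results(wd: str) -> str:
    # The runner expects the evaluator to have left its results here.
    fn = os.path.join(wd, "challenge-results", "challenge_results.yaml")
    if not os.path.exists(fn):
        raise NoResultsFound(f"File {fn!r} does not exist.")
    with open(fn) as f:
        return f.read()
```

For jobs 38145, 36281, and 36280 below, a container exited with code 1 before that file was written, so this check fires and the job is marked as an error rather than scored.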
36281 | LFv-sim | error | no | - | - | 0:00:55 | -
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-Sandy2-sandy-1-job36281-a-wd/challenge-results/challenge_results.yaml' does not exist.
36280 | LFv-sim | error | no | - | - | 0:00:58 | -
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-Sandy1-sandy-1-job36280-a-wd/challenge-results/challenge_results.yaml' does not exist.
35717 | LFv-sim | success | no | - | - | 0:00:59 | -
35336 | LFv-sim | error | no | - | - | 0:19:55 | -
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-reg01-94a6fab21ac9-1-job35336:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-reg01-94a6fab21ac9-1-job35336/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-reg01-94a6fab21ac9-1-job35336/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-reg01-94a6fab21ac9-1-job35336/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-reg01-94a6fab21ac9-1-job35336/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9354/LFv-sim-reg01-94a6fab21ac9-1-job35336/logs/challenges-runner/stderr.log
34972 | LFv-sim | success | no | - | - | 0:20:27 | -
34739 | LFv-sim | success | no | - | - | 0:23:26 | -