
Submission 9335

Competing: yes
Challenge: aido5-LF-sim-validation
User: Himanshu Arora 🇨🇦
Date submitted:
Last status update:
Status: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58172
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58172

Episodes evaluated (per-episode statistics were shown as images on the original page):

 - LF-norm-loop-000
 - LF-norm-small_loop-000
 - LF-norm-techtrack-000
 - LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID: 58172 | step: LFv-sim | status: success | up to date: yes | duration: 0:05:56
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.48892942179856824
survival_time_median: 5.249999999999989
deviation-center-line_median: 0.13191648840869452
in-drivable-lane_median: 3.2499999999999902


other stats

agent_compute-ego0_max: 0.013743343807402112
agent_compute-ego0_mean: 0.013403877349525268
agent_compute-ego0_median: 0.013371812058524724
agent_compute-ego0_min: 0.013128541473649506
complete-iteration_max: 0.22110296299583035
complete-iteration_mean: 0.19127775043025053
complete-iteration_median: 0.1898707323240273
complete-iteration_min: 0.16426657407711714
deviation-center-line_max: 0.1643959468172477
deviation-center-line_mean: 0.12557847619610993
deviation-center-line_min: 0.07408498114980296
deviation-heading_max: 1.3085060007319516
deviation-heading_mean: 0.8127054713150081
deviation-heading_median: 0.7946842392500675
deviation-heading_min: 0.35294740602794605
driven_any_max: 3.0258559942245338
driven_any_mean: 1.7592592311392878
driven_any_median: 1.470999276751933
driven_any_min: 1.0691823768287514
driven_lanedir_consec_max: 0.6494315759471684
driven_lanedir_consec_mean: 0.4751876951392385
driven_lanedir_consec_min: 0.27346036101264914
driven_lanedir_max: 0.6494315759471684
driven_lanedir_mean: 0.4751876951392385
driven_lanedir_median: 0.48892942179856824
driven_lanedir_min: 0.27346036101264914
get_duckie_state_max: 1.6481448442508014e-06
get_duckie_state_mean: 1.4675302452174184e-06
get_duckie_state_median: 1.4610136086158909e-06
get_duckie_state_min: 1.2999489193870908e-06
get_robot_state_max: 0.00418083851154034
get_robot_state_mean: 0.003935392511878434
get_robot_state_median: 0.003895510792785864
get_robot_state_min: 0.0037697099504016697
get_state_dump_max: 0.005324725615672576
get_state_dump_mean: 0.005093521001168736
get_state_dump_median: 0.00503741501306294
get_state_dump_min: 0.004974528362876491
get_ui_image_max: 0.04098304698341771
get_ui_image_mean: 0.03319829090434767
get_ui_image_median: 0.03185545289036118
get_ui_image_min: 0.02809921085325062
in-drivable-lane_max: 8.100000000000005
in-drivable-lane_mean: 4.099999999999995
in-drivable-lane_min: 1.7999999999999936
per-episodes details (JSON, one entry per episode):
{"LF-norm-loop-000-ego0": {"driven_any": 3.0258559942245338, "get_ui_image": 0.03151760834913987, "step_physics": 0.10120695798824995, "survival_time": 9.700000000000005, "driven_lanedir": 0.27346036101264914, "get_state_dump": 0.005324725615672576, "get_robot_state": 0.00418083851154034, "sim_render-ego0": 0.004185291437002329, "get_duckie_state": 1.6481448442508014e-06, "in-drivable-lane": 8.100000000000005, "deviation-heading": 1.175100072434885, "agent_compute-ego0": 0.013540075986813276, "complete-iteration": 0.1755498849428617, "set_robot_commands": 0.0024870848044370995, "deviation-center-line": 0.1643959468172477, "driven_lanedir_consec": 0.27346036101264914, "sim_compute_sim_state": 0.010759213031866612, "sim_compute_performance-ego0": 0.002242935620821439},
 "LF-norm-zigzag-000-ego0": {"driven_any": 1.278825974691371, "get_ui_image": 0.04098304698341771, "step_physics": 0.13898598269412393, "survival_time": 4.699999999999991, "driven_lanedir": 0.3629620387619259, "get_state_dump": 0.004974528362876491, "get_robot_state": 0.003925960942318565, "sim_render-ego0": 0.003974312230160362, "get_duckie_state": 1.4731758519222862e-06, "in-drivable-lane": 2.8499999999999934, "deviation-heading": 1.3085060007319516, "agent_compute-ego0": 0.013203548130236172, "complete-iteration": 0.22110296299583035, "set_robot_commands": 0.0024167462399131375, "deviation-center-line": 0.1429278448490799, "driven_lanedir_consec": 0.3629620387619259, "sim_compute_sim_state": 0.0104601031855533, "sim_compute_performance-ego0": 0.002082649030183491},
 "LF-norm-techtrack-000-ego0": {"driven_any": 1.0691823768287514, "get_ui_image": 0.032193297431582495, "step_physics": 0.13296330258959815, "survival_time": 4.149999999999993, "driven_lanedir": 0.6494315759471684, "get_state_dump": 0.004976000104631696, "get_robot_state": 0.0037697099504016697, "sim_render-ego0": 0.0039669473965962725, "get_duckie_state": 1.2999489193870908e-06, "in-drivable-lane": 1.7999999999999936, "deviation-heading": 0.41426840606525006, "agent_compute-ego0": 0.013743343807402112, "complete-iteration": 0.20419157970519292, "set_robot_commands": 0.0022067541167849584, "deviation-center-line": 0.07408498114980296, "driven_lanedir_consec": 0.6494315759471684, "sim_compute_sim_state": 0.008116631280808221, "sim_compute_performance-ego0": 0.002157696655818394},
 "LF-norm-small_loop-000-ego0": {"driven_any": 1.6631725788124958, "get_ui_image": 0.02809921085325062, "step_physics": 0.10042745231563208, "survival_time": 5.799999999999987, "driven_lanedir": 0.6148968048352106, "get_state_dump": 0.005098829921494182, "get_robot_state": 0.003865060643253163, "sim_render-ego0": 0.003908799244807317, "get_duckie_state": 1.4488513653094951e-06, "in-drivable-lane": 3.649999999999987, "deviation-heading": 0.35294740602794605, "agent_compute-ego0": 0.013128541473649506, "complete-iteration": 0.16426657407711714, "set_robot_commands": 0.002378007285615318, "deviation-center-line": 0.12090513196830915, "driven_lanedir_consec": 0.6148968048352106, "sim_compute_sim_state": 0.005261812454614884, "sim_compute_performance-ego0": 0.0020034150180653627}}
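The aggregate medians reported above can be reproduced from these per-episode values. A minimal sketch (values transcribed from the per-episodes blob; the `episodes` dict and `metric_median` helper are illustrative, not part of the evaluator):

```python
from statistics import median

# Per-episode values transcribed from the "per-episodes" details above (job 58172).
episodes = {
    "LF-norm-loop-000-ego0":       {"survival_time": 9.700000000000005, "deviation-center-line": 0.1643959468172477},
    "LF-norm-zigzag-000-ego0":     {"survival_time": 4.699999999999991, "deviation-center-line": 0.1429278448490799},
    "LF-norm-techtrack-000-ego0":  {"survival_time": 4.149999999999993, "deviation-center-line": 0.07408498114980296},
    "LF-norm-small_loop-000-ego0": {"survival_time": 5.799999999999987, "deviation-center-line": 0.12090513196830915},
}

def metric_median(name: str) -> float:
    # With an even number of episodes, median() averages the two middle
    # values, so the aggregate need not equal any single episode's value.
    return median(ep[name] for ep in episodes.values())

print(metric_median("survival_time"))          # 5.249999999999989
print(metric_median("deviation-center-line"))  # 0.13191648840869452
```

This explains why `survival_time_median` (5.2499...) matches none of the four episode survival times: it is the mean of the two middle episodes.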
set_robot_commands_max: 0.0024870848044370995
set_robot_commands_mean: 0.002372148111687628
set_robot_commands_median: 0.002397376762764228
set_robot_commands_min: 0.0022067541167849584
sim_compute_performance-ego0_max: 0.002242935620821439
sim_compute_performance-ego0_mean: 0.002121674081222172
sim_compute_performance-ego0_median: 0.0021201728430009425
sim_compute_performance-ego0_min: 0.0020034150180653627
sim_compute_sim_state_max: 0.010759213031866612
sim_compute_sim_state_mean: 0.008649439988210755
sim_compute_sim_state_median: 0.009288367233180762
sim_compute_sim_state_min: 0.005261812454614884
sim_render-ego0_max: 0.004185291437002329
sim_render-ego0_mean: 0.00400883757714157
sim_render-ego0_median: 0.003970629813378318
sim_render-ego0_min: 0.003908799244807317
simulation-passed: 1
step_physics_max: 0.13898598269412393
step_physics_mean: 0.11839592389690104
step_physics_median: 0.11708513028892406
step_physics_min: 0.10042745231563208
survival_time_max: 9.700000000000005
survival_time_mean: 6.087499999999993
survival_time_min: 4.149999999999993
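The timing stats also show where the roughly 0.19 s per simulation iteration goes. Summing the per-component means (a rough decomposition; component names copied from the stats above) accounts for nearly all of `complete-iteration_mean`, with the physics step dominating:

```python
# Mean per-iteration timings (seconds) copied from the stats above (job 58172).
component_means = {
    "step_physics":                 0.11839592389690104,
    "get_ui_image":                 0.03319829090434767,
    "agent_compute-ego0":           0.013403877349525268,
    "sim_compute_sim_state":        0.008649439988210755,
    "get_state_dump":               0.005093521001168736,
    "sim_render-ego0":              0.00400883757714157,
    "get_robot_state":              0.003935392511878434,
    "set_robot_commands":           0.002372148111687628,
    "sim_compute_performance-ego0": 0.002121674081222172,
    "get_duckie_state":             1.4675302452174184e-06,
}

complete_iteration_mean = 0.19127775043025053

total = sum(component_means.values())
# The listed components account for ~99.9% of the iteration time;
# step_physics alone is roughly 60% of the loop.
print(f"components sum: {total:.5f} s of {complete_iteration_mean:.5f} s")
print(f"step_physics share: {component_means['step_physics'] / total:.1%}")
```

The agent itself (`agent_compute-ego0`, ~13 ms) is a small fraction of the loop; most of the budget is simulator work.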
No reset possible
Job ID: 52273 | step: LFv-sim | status: error | up to date: no | duration: 0:01:42
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139941224527376
- M:video_aido:cmdline(in:/;out:/) 139946205110480
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
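This failure traces back to Pillow being unable to identify `banner1.png`, which usually means the file is zero-byte, truncated, or not actually a PNG. A quick stdlib check of the PNG signature would catch such a file before the video pipeline tries to load it (`looks_like_png` is a hypothetical pre-flight helper, not part of the evaluator):

```python
# First 8 bytes of every well-formed PNG file, per the PNG specification.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path: str) -> bool:
    """Return False for missing, empty, truncated, or non-PNG files."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except OSError:
        return False
```

Pillow's `Image.open` performs this kind of signature sniffing across all registered formats, which is why it raises `UnidentifiedImageError` here rather than failing mid-decode.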
Job ID: 52267 | step: LFv-sim | status: error | up to date: no | duration: 0:03:04
InvalidEvaluator:
(identical traceback to job 52273: PIL.UnidentifiedImageError: cannot identify image file 'banner1.png', raised from the 'static_image' block while make_video2 renders the episode video, ending in duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes.)
Job ID: 41702 | step: LFv-sim | status: success | up to date: no | duration: 0:04:52
Job ID: 41701 | step: LFv-sim | status: success | up to date: no | duration: 0:04:37
Job ID: 41700 | step: LFv-sim | status: success | up to date: no | duration: 0:04:31
Job ID: 38154 | step: LFv-sim | status: error | up to date: no | duration: 0:00:37
The container "evaluator" exited with code 1.

Look at the logs for the container to know more about the error.

No results found! Something is very wrong.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9335/LFv-sim-mont04-e828c68b6a88-1-job38154-a-wd/challenge-results/challenge_results.yaml' does not exist.
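The NoResultsFound failure means the evaluator container died before writing `challenge-results/challenge_results.yaml` into the job's working directory, so the runner's post-run check found nothing to score. That check amounts to a file-existence test, roughly as below (a sketch with a stand-in exception class, not the actual `duckietown_challenges` code):

```python
from pathlib import Path

class NoResultsFound(Exception):
    """Stand-in for duckietown_challenges.challenge_results.NoResultsFound."""

def read_challenge_results_path(wd: str) -> Path:
    """The evaluator must leave challenge-results/challenge_results.yaml
    in the working dir; otherwise the runner reports the job as errored."""
    p = Path(wd) / "challenge-results" / "challenge_results.yaml"
    if not p.is_file():
        raise NoResultsFound(f"File '{p}' does not exist.")
    return p
```

So an early crash of the evaluator (before scoring) surfaces as this secondary NoResultsFound error in the runner rather than as the original exception.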
Job ID: 38150 | step: LFv-sim | status: error | up to date: no | duration: 0:00:43
The container "evaluator" exited with code 1.
(identical NoResultsFound traceback to job 38154; only the working directory differs:)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9335/LFv-sim-mont02-80325a328f54-1-job38150-a-wd/challenge-results/challenge_results.yaml' does not exist.
Job ID: 36299 | step: LFv-sim | status: success | up to date: no | duration: 0:11:10
Job ID: 36297 | step: LFv-sim | status: success | up to date: no | duration: 0:07:25
Job ID: 35730 | step: LFv-sim | status: success | up to date: no | duration: 0:00:56
Job ID: 35346 | step: LFv-sim | status: error | up to date: no | duration: 0:09:24
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9335/LFv-sim-reg01-94a6fab21ac9-1-job35346:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9335/LFv-sim-reg01-94a6fab21ac9-1-job35346/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9335/LFv-sim-reg01-94a6fab21ac9-1-job35346/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9335/LFv-sim-reg01-94a6fab21ac9-1-job35346/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9335/LFv-sim-reg01-94a6fab21ac9-1-job35346/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9335/LFv-sim-reg01-94a6fab21ac9-1-job35346/logs/challenges-runner/stderr.log
Job ID: 34981 | step: LFv-sim | status: success | up to date: no | duration: 0:10:02
Job ID: 34702 | step: LFv-sim | status: success | up to date: no | duration: 0:12:25
Job ID: 34701 | step: LFv-sim | status: success | up to date: no | duration: 0:10:59