
Submission 9379

Submission: 9379
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58128
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58128


Episodes:
- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Columns: Job ID · step · status · up to date · date started · date completed · duration · message
Job 58128 · LFv-sim · success · up to date: yes · duration 0:13:50
driven_lanedir_consec_median: 0.6768008873145092
survival_time_median: 22.72499999999999
deviation-center-line_median: 0.15420773127724277
in-drivable-lane_median: 19.699999999999996


Other stats:
agent_compute-ego0_max: 0.014099253714084626
agent_compute-ego0_mean: 0.01353224396731143
agent_compute-ego0_median: 0.013594649827811216
agent_compute-ego0_min: 0.012840422499538665
complete-iteration_max: 0.256226502932035
complete-iteration_mean: 0.1990592380528237
complete-iteration_median: 0.1851261978758159
complete-iteration_min: 0.1697580535276279
deviation-center-line_max: 0.5391354483191219
deviation-center-line_mean: 0.24367457857832192
deviation-center-line_min: 0.1271474034396802
deviation-heading_max: 1.608962849471553
deviation-heading_mean: 1.0680569247036509
deviation-heading_median: 0.9973409202206068
deviation-heading_min: 0.6685830089018354
driven_any_max: 9.7506197127936
driven_any_mean: 4.91099325322872
driven_any_median: 4.797319602806865
driven_any_min: 0.2987140945075537
driven_lanedir_consec_max: 1.9357240115194547
driven_lanedir_consec_mean: 0.8587348052782294
driven_lanedir_consec_min: 0.14561343496444443
driven_lanedir_max: 1.9357240115194547
driven_lanedir_mean: 0.8587348052782294
driven_lanedir_median: 0.6768008873145092
driven_lanedir_min: 0.14561343496444443
get_duckie_state_max: 1.633167266845703e-06
get_duckie_state_mean: 1.4882962827758075e-06
get_duckie_state_median: 1.5004688547872248e-06
get_duckie_state_min: 1.3190801546830787e-06
get_robot_state_max: 0.004152567684650421
get_robot_state_mean: 0.003910287485225143
get_robot_state_median: 0.0038834389025566942
get_robot_state_min: 0.003721704451136765
get_state_dump_max: 0.005488821176382211
get_state_dump_mean: 0.0050071893841866395
get_state_dump_median: 0.004964465038793858
get_state_dump_min: 0.004611006282776628
get_ui_image_max: 0.03750249422513521
get_ui_image_mean: 0.03217217260712708
get_ui_image_median: 0.03126533066682746
get_ui_image_min: 0.02865553486971818
in-drivable-lane_max: 36.449999999999605
in-drivable-lane_mean: 19.2624999999999
in-drivable-lane_min: 1.1999999999999955
per-episodes details: {"LF-norm-loop-000-ego0": {"driven_any": 9.7506197127936, "get_ui_image": 0.02865553486971818, "step_physics": 0.102537022519404, "survival_time": 44.79999999999959, "driven_lanedir": 1.9357240115194547, "get_state_dump": 0.004611006282776628, "get_robot_state": 0.003721704451136765, "sim_render-ego0": 0.003989527456736485, "get_duckie_state": 1.3943632842439209e-06, "in-drivable-lane": 36.449999999999605, "deviation-heading": 1.608962849471553, "agent_compute-ego0": 0.012840422499538665, "complete-iteration": 0.1697580535276279, "set_robot_commands": 0.0022795383746807393, "deviation-center-line": 0.5391354483191219, "driven_lanedir_consec": 1.9357240115194547, "sim_compute_sim_state": 0.00896823765043431, "sim_compute_performance-ego0": 0.0020672166626588954}, "LF-norm-zigzag-000-ego0": {"driven_any": 0.2987140945075537, "get_ui_image": 0.03750249422513521, "step_physics": 0.17636758364163913, "survival_time": 3.1999999999999966, "driven_lanedir": 0.14561343496444443, "get_state_dump": 0.005488821176382211, "get_robot_state": 0.0039950187389667215, "sim_render-ego0": 0.00415799434368427, "get_duckie_state": 1.6065744253305289e-06, "in-drivable-lane": 1.1999999999999955, "deviation-heading": 1.3240486981883426, "agent_compute-ego0": 0.01385504282437838, "complete-iteration": 0.256226502932035, "set_robot_commands": 0.002635548664973332, "deviation-center-line": 0.1271474034396802, "driven_lanedir_consec": 0.14561343496444443, "sim_compute_sim_state": 0.009929979764498198, "sim_compute_performance-ego0": 0.0021994700798621545}, "LF-norm-techtrack-000-ego0": {"driven_any": 7.960135566562005, "get_ui_image": 0.03345697443590025, "step_physics": 0.1133318534703769, "survival_time": 37.50000000000001, "driven_lanedir": 0.6635707161857509, "get_state_dump": 0.004741181705032938, "get_robot_state": 0.003771859066146668, "sim_render-ego0": 0.004132500977395537, "get_duckie_state": 1.3190801546830787e-06, "in-drivable-lane": 34.55000000000001, "deviation-heading": 0.6706331422528712, "agent_compute-ego0": 0.01333425683124405, "complete-iteration": 0.1913845520044929, "set_robot_commands": 0.002276994575673191, "deviation-center-line": 0.1540455752642049, "driven_lanedir_consec": 0.6635707161857509, "sim_compute_sim_state": 0.014134468314809584, "sim_compute_performance-ego0": 0.0021144477092473707}, "LF-norm-small_loop-000-ego0": {"driven_any": 1.6345036390517245, "get_ui_image": 0.02907368689775467, "step_physics": 0.11133110970258712, "survival_time": 7.94999999999998, "driven_lanedir": 0.6900310584432674, "get_state_dump": 0.005187748372554779, "get_robot_state": 0.004152567684650421, "sim_render-ego0": 0.004345628619194031, "get_duckie_state": 1.633167266845703e-06, "in-drivable-lane": 4.849999999999983, "deviation-heading": 0.6685830089018354, "agent_compute-ego0": 0.014099253714084626, "complete-iteration": 0.178867843747139, "set_robot_commands": 0.0025910839438438417, "deviation-center-line": 0.15436988729028064, "driven_lanedir_consec": 0.6900310584432674, "sim_compute_sim_state": 0.005781680345535278, "sim_compute_performance-ego0": 0.0021973133087158205}}
set_robot_commands_max: 0.002635548664973332
set_robot_commands_mean: 0.0024457913897927763
set_robot_commands_median: 0.0024353111592622908
set_robot_commands_min: 0.002276994575673191
sim_compute_performance-ego0_max: 0.0021994700798621545
sim_compute_performance-ego0_mean: 0.0021446119401210603
sim_compute_performance-ego0_median: 0.0021558805089815954
sim_compute_performance-ego0_min: 0.0020672166626588954
sim_compute_sim_state_max: 0.014134468314809584
sim_compute_sim_state_mean: 0.009703591518819344
sim_compute_sim_state_median: 0.009449108707466252
sim_compute_sim_state_min: 0.005781680345535278
sim_render-ego0_max: 0.004345628619194031
sim_render-ego0_mean: 0.0041564128492525805
sim_render-ego0_median: 0.0041452476605399035
sim_render-ego0_min: 0.003989527456736485
simulation-passed: 1
step_physics_max: 0.17636758364163913
step_physics_mean: 0.1258918923335018
step_physics_median: 0.112331481586482
step_physics_min: 0.102537022519404
survival_time_max: 44.79999999999959
survival_time_mean: 23.362499999999898
survival_time_min: 3.1999999999999966
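As a sanity check on the aggregates above: with four episodes, each *_median is the average of the two middle episode values. A minimal sketch, using the survival_time figures copied from the per-episodes details blob:

```python
from statistics import median

# survival_time per episode, copied from the per-episodes details above
survival_times = {
    "LF-norm-loop-000-ego0": 44.79999999999959,
    "LF-norm-zigzag-000-ego0": 3.1999999999999966,
    "LF-norm-techtrack-000-ego0": 37.50000000000001,
    "LF-norm-small_loop-000-ego0": 7.94999999999998,
}

# With an even number of episodes, statistics.median averages the two
# middle values: sorted -> [3.2, 7.95, 37.5, 44.8] -> (7.95 + 37.5) / 2
survival_time_median = median(survival_times.values())
print(survival_time_median)  # ~22.725, matching the reported survival_time_median
```

The same computation over the other per-episode keys reproduces the remaining *_median rows.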
Job 58126 · LFv-sim · success · up to date: yes · duration 0:12:04
Job 58120 · LFv-sim · success · up to date: yes · duration 0:08:04
Job 52180 · LFv-sim · error · up to date: no · duration 0:05:58
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140408816889088
- M:video_aido:cmdline(in:/;out:/) 140408816801680
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
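Both failing jobs trace back to banner1.png not being a parseable image (for example a truncated download, or an HTML error page saved under a .png name). As a hypothetical pre-flight check, not part of the evaluator code, the PNG signature can be verified with the standard library before the video pipeline touches the file:

```python
from pathlib import Path

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # first 8 bytes of every valid PNG file

def looks_like_png(path: str) -> bool:
    """Return True if the file starts with the PNG magic bytes."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except OSError:
        return False

# e.g. an HTML page saved under a .png name fails the check:
Path("banner1.png").write_bytes(b"<html>not an image</html>")
print(looks_like_png("banner1.png"))  # False
```

A check like this would turn the late BadMethodCall inside the video pipeline into an early, readable error about the bad asset.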
Job 52175 · LFv-sim · error · up to date: no · duration 0:06:20
InvalidEvaluator: identical traceback to job 52180 above (PIL.UnidentifiedImageError: cannot identify image file 'banner1.png').
Job 41673 · LFv-sim · success · up to date: no · duration 0:07:46
Job 41672 · LFv-sim · success · up to date: no · duration 0:08:36
Job 38082 · LFv-sim · success · up to date: no · duration 0:08:19
Job 36247 · LFv-sim · success · up to date: no · duration 0:11:30
Job 35689 · LFv-sim · success · up to date: no · duration 0:01:08
Job 35309 · LFv-sim · error · up to date: no · duration 0:14:51
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309/logs/challenges-runner/stderr.log
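When challenge_results.yaml is missing, the listed stderr.log is the first place to look. A small diagnostic sketch (diagnose_job is a hypothetical helper, not part of duckietown-challenges; the path layout follows the listing above):

```python
from pathlib import Path

def diagnose_job(workdir: str, tail_lines: int = 20) -> str:
    """Report whether challenge_results.yaml exists; if not, tail stderr.log."""
    wd = Path(workdir)
    results = wd / "challenge-results" / "challenge_results.yaml"
    if results.exists():
        return f"OK: {results} present"
    stderr_log = wd / "logs" / "challenges-runner" / "stderr.log"
    lines = [f"MISSING: {results}"]
    if stderr_log.exists():
        tail = stderr_log.read_text(errors="replace").splitlines()[-tail_lines:]
        lines.append("last stderr lines:")
        lines.extend(tail)
    return "\n".join(lines)

print(diagnose_job(
    "/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation"
    "/submission9379/LFv-sim-reg02-1b92df2e7e91-1-job35309"
))
```

On the evaluator host this prints the missing-file notice plus the tail of the runner's stderr, which is usually enough to spot the import error or crash.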
Job 35308 · LFv-sim · error · up to date: no · duration 0:13:52
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg03-0c28c9d61367-1-job35308:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg03-0c28c9d61367-1-job35308/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg03-0c28c9d61367-1-job35308/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg03-0c28c9d61367-1-job35308/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg03-0c28c9d61367-1-job35308/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9379/LFv-sim-reg03-0c28c9d61367-1-job35308/logs/challenges-runner/stderr.log
Job 34896 · LFv-sim · success · up to date: no · duration 0:17:17
Job 34892 · LFv-sim · success · up to date: no · duration 0:16:13
Job 34848 · LFv-sim · success · up to date: no · duration 0:15:44
Job 34847 · LFv-sim · success · up to date: no · duration 0:15:08