
Submission 9374

Submission: 9374
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58122
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58122

Click the images to see detailed statistics about each episode.

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
58122 | LFv-sim | success | yes | 0:10:52
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.46113229971088454
survival_time_median: 10.600000000000048
deviation-center-line_median: 0.1634829568198553
in-drivable-lane_median: 8.250000000000043


Other stats
agent_compute-ego0_max: 0.013652091497903342
agent_compute-ego0_mean: 0.013159345543633672
agent_compute-ego0_median: 0.01309910952502677
agent_compute-ego0_min: 0.012787071626577804
complete-iteration_max: 0.24739942003469
complete-iteration_mean: 0.19046659201003033
complete-iteration_median: 0.18212845275025497
complete-iteration_min: 0.15021004250492137
deviation-center-line_max: 0.27091689859837204
deviation-center-line_mean: 0.16945319315548615
deviation-center-line_min: 0.07992996038386191
deviation-heading_max: 1.483408358622085
deviation-heading_mean: 0.9143120567470934
deviation-heading_median: 0.8565777124474154
deviation-heading_min: 0.4606844434714577
driven_any_max: 12.159280072220424
driven_any_mean: 4.200946966245074
driven_any_median: 2.1600308013332667
driven_any_min: 0.32444619009334047
driven_lanedir_consec_max: 0.8893886666380376
driven_lanedir_consec_mean: 0.49543757604369254
driven_lanedir_consec_min: 0.17009703811496335
driven_lanedir_max: 0.8893886666380376
driven_lanedir_mean: 0.49543757604369254
driven_lanedir_median: 0.46113229971088454
driven_lanedir_min: 0.17009703811496335
get_duckie_state_max: 1.9021086640410371e-06
get_duckie_state_mean: 1.786821862084891e-06
get_duckie_state_median: 1.8514654206371944e-06
get_duckie_state_min: 1.542247943024137e-06
get_robot_state_max: 0.0035965704655909277
get_robot_state_mean: 0.0035501811934654133
get_robot_state_median: 0.0035442580096831586
get_robot_state_min: 0.003515638288904409
get_state_dump_max: 0.004689965929303851
get_state_dump_mean: 0.004494833794098474
get_state_dump_median: 0.004489371060554009
get_state_dump_min: 0.004310627125982029
get_ui_image_max: 0.03632886292504483
get_ui_image_mean: 0.03101098532493652
get_ui_image_median: 0.030866505627712697
get_ui_image_min: 0.025982067119275865
in-drivable-lane_max: 48.1499999999992
in-drivable-lane_mean: 16.53749999999982
in-drivable-lane_min: 1.499999999999997
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 3.260473346139455, "get_ui_image": 0.02833801169893635, "step_physics": 0.10138071686474244, "survival_time": 16.700000000000102, "driven_lanedir": 0.28020879076193855, "get_state_dump": 0.004310627125982029, "get_robot_state": 0.0035460237246840746, "sim_render-ego0": 0.003660726547241211, "get_duckie_state": 1.542247943024137e-06, "in-drivable-lane": 14.50000000000009, "deviation-heading": 1.483408358622085, "agent_compute-ego0": 0.012787071626577804, "complete-iteration": 0.16786790107613178, "set_robot_commands": 0.002090239880689934, "deviation-center-line": 0.22820725077036297, "driven_lanedir_consec": 0.28020879076193855, "sim_compute_sim_state": 0.009812878138983428, "sim_compute_performance-ego0": 0.0018718335165906308},
 "LF-norm-zigzag-000-ego0": {"driven_any": 0.32444619009334047, "get_ui_image": 0.03632886292504483, "step_physics": 0.17236258944527047, "survival_time": 2.9999999999999973, "driven_lanedir": 0.17009703811496335, "get_state_dump": 0.0044799554543417, "get_robot_state": 0.003515638288904409, "sim_render-ego0": 0.003732751627437404, "get_duckie_state": 1.8291786068775616e-06, "in-drivable-lane": 1.499999999999997, "deviation-heading": 0.8938852598550348, "agent_compute-ego0": 0.013209417218067608, "complete-iteration": 0.24739942003469, "set_robot_commands": 0.002280219656522157, "deviation-center-line": 0.09875866286934765, "driven_lanedir_consec": 0.17009703811496335, "sim_compute_sim_state": 0.00952976258074651, "sim_compute_performance-ego0": 0.0018833230753413969},
 "LF-norm-techtrack-000-ego0": {"driven_any": 1.0595882565270782, "get_ui_image": 0.033394999556489044, "step_physics": 0.1249861350426307, "survival_time": 4.499999999999992, "driven_lanedir": 0.6420558086598305, "get_state_dump": 0.004689965929303851, "get_robot_state": 0.0035965704655909277, "sim_render-ego0": 0.0038047334650060634, "get_duckie_state": 1.9021086640410371e-06, "in-drivable-lane": 1.999999999999993, "deviation-heading": 0.4606844434714577, "agent_compute-ego0": 0.013652091497903342, "complete-iteration": 0.19638900442437812, "set_robot_commands": 0.002115178894210648, "deviation-center-line": 0.07992996038386191, "driven_lanedir_consec": 0.6420558086598305, "sim_compute_sim_state": 0.00811466279920641, "sim_compute_performance-ego0": 0.0019515975491031185},
 "LF-norm-small_loop-000-ego0": {"driven_any": 12.159280072220424, "get_ui_image": 0.025982067119275865, "step_physics": 0.08920734848526639, "survival_time": 51.39999999999922, "driven_lanedir": 0.8893886666380376, "get_state_dump": 0.004498786666766316, "get_robot_state": 0.0035424922946822425, "sim_render-ego0": 0.003736061072094679, "get_duckie_state": 1.873752234396828e-06, "in-drivable-lane": 48.1499999999992, "deviation-heading": 0.819270165039796, "agent_compute-ego0": 0.012988801831985933, "complete-iteration": 0.15021004250492137, "set_robot_commands": 0.002151474892455936, "deviation-center-line": 0.27091689859837204, "driven_lanedir_consec": 0.8893886666380376, "sim_compute_sim_state": 0.006129768545819905, "sim_compute_performance-ego0": 0.0018948873810911784}}
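The aggregate rows (max/mean/median/min) are straightforward statistics over the four episodes. As a minimal sketch, plain Python recomputes the survival_time aggregates from the per-episode values above; the variable names are ours, only the numbers come from the JSON:

```python
import statistics

# survival_time per episode, copied from the per-episodes details above
survival_times = {
    "LF-norm-loop-000-ego0": 16.700000000000102,
    "LF-norm-zigzag-000-ego0": 2.9999999999999973,
    "LF-norm-techtrack-000-ego0": 4.499999999999992,
    "LF-norm-small_loop-000-ego0": 51.39999999999922,
}

values = list(survival_times.values())
aggregates = {
    "max": max(values),
    "mean": statistics.mean(values),
    "median": statistics.median(values),  # with 4 episodes: mean of the two middle values
    "min": min(values),
}
print(aggregates)
```

With an even number of episodes the median is the average of the two middle values, which is why survival_time_median (10.6) does not coincide with any single episode.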
set_robot_commands_max: 0.002280219656522157
set_robot_commands_mean: 0.002159278330969669
set_robot_commands_median: 0.002133326893333292
set_robot_commands_min: 0.002090239880689934
sim_compute_performance-ego0_max: 0.0019515975491031185
sim_compute_performance-ego0_mean: 0.001900410380531581
sim_compute_performance-ego0_median: 0.001889105228216287
sim_compute_performance-ego0_min: 0.0018718335165906308
sim_compute_sim_state_max: 0.009812878138983428
sim_compute_sim_state_mean: 0.008396768016189063
sim_compute_sim_state_median: 0.00882221268997646
sim_compute_sim_state_min: 0.006129768545819905
sim_render-ego0_max: 0.0038047334650060634
sim_render-ego0_mean: 0.003733568177944839
sim_render-ego0_median: 0.0037344063497660417
sim_render-ego0_min: 0.003660726547241211
simulation-passed: 1
step_physics_max: 0.17236258944527047
step_physics_mean: 0.1219841974594775
step_physics_median: 0.11318342595368658
step_physics_min: 0.08920734848526639
survival_time_max: 51.39999999999922
survival_time_mean: 18.899999999999828
survival_time_min: 2.9999999999999973
52202 | LFv-sim | error | no | 0:01:45
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139939768062448
- M:video_aido:cmdline(in:/;out:/) 139939768065184
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
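The root cause in the traceback is that PIL could not identify 'banner1.png', an asset the evaluator's video pipeline (make_video2) tries to load, so the file exists but is not a valid image. A minimal, stdlib-only sketch of the kind of pre-flight check that distinguishes this case; the helper name and approach are illustrative, not part of the evaluator:

```python
# Illustrative check (not part of duckietown-challenges): a file that exists
# but lacks the 8-byte PNG signature produces exactly the kind of
# "cannot identify image file" failure seen in the traceback above.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_readable(path: str) -> bool:
    """Return True only if `path` exists and starts with the PNG signature."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except FileNotFoundError:
        return False
```

A missing file and a corrupt/truncated one both return False here, while PIL raises two different exceptions (FileNotFoundError vs UnidentifiedImageError), which is why the traceback chains through several handlers.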
52199 | LFv-sim | error | no | 0:02:47
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139962615949824
- M:video_aido:cmdline(in:/;out:/) 139962686630064
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
41676 | LFv-sim | success | no | 0:10:13
38100 | LFv-sim | success | no | 0:11:38
38099 | LFv-sim | success | no | 0:12:25
36252 | LFv-sim | success | no | 0:08:08
36250 | LFv-sim | success | no | 0:15:37
35696 | LFv-sim | success | no | 0:00:57
35694 | LFv-sim | success | no | 0:01:00
35314 | LFv-sim | error | no | 0:21:46
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg05-b2dee9d94ee0-1-job35314:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg05-b2dee9d94ee0-1-job35314/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg05-b2dee9d94ee0-1-job35314/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg05-b2dee9d94ee0-1-job35314/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg05-b2dee9d94ee0-1-job35314/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg05-b2dee9d94ee0-1-job35314/logs/challenges-runner/stderr.log
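The runner's complaint is simply that challenge_results.yaml never appeared under the job's working directory. A small illustrative sketch of that check (pathlib-based; the function name is ours, not part of duckietown-challenges):

```python
from pathlib import Path

def results_present(workdir: str) -> bool:
    """Check for the file the challenges runner expects after a job:
    <workdir>/challenge-results/challenge_results.yaml."""
    results = Path(workdir) / "challenge-results" / "challenge_results.yaml"
    if results.is_file():
        return True
    # Mirrors the advice in the message above: when the file is absent,
    # the evaluator log is the place to look.
    print(f"{results} does not exist; check logs/challenges-runner/stderr.log")
    return False
```

In the failed jobs above, only the docker-compose files and the runner logs were present, so this check would return False.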
35313 | LFv-sim | error | no | 0:16:37
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg02-1b92df2e7e91-1-job35313:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg02-1b92df2e7e91-1-job35313/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish, and sometimes that there was an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg02-1b92df2e7e91-1-job35313/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg02-1b92df2e7e91-1-job35313/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg02-1b92df2e7e91-1-job35313/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9374/LFv-sim-reg02-1b92df2e7e91-1-job35313/logs/challenges-runner/stderr.log
34894 | LFv-sim | success | no | 0:21:18
34833 | LFv-sim | success | no | 0:22:45
34832 | LFv-sim | success | no | 0:23:45