Duckietown Challenges

Submission 9378

Submission: 9378
Competing: yes
Challenge: aido5-LF-sim-validation
User: Yishu Malhotra 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58127
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

58127


Episodes:
- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
58127 | LFv-sim | success | yes | 0:09:15
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.6623705970796556
survival_time_median: 11.375000000000028
deviation-center-line_median: 0.10975204729996828
in-drivable-lane_median: 8.725000000000028


Other stats
agent_compute-ego0_max: 0.013547622368310802
agent_compute-ego0_mean: 0.012736831140274471
agent_compute-ego0_median: 0.012701287685762674
agent_compute-ego0_min: 0.011997126821261733
complete-iteration_max: 0.2438538323587446
complete-iteration_mean: 0.1947453227883219
complete-iteration_median: 0.18574612979216
complete-iteration_min: 0.16363519921022304
deviation-center-line_max: 0.6641603488159539
deviation-center-line_mean: 0.24321272180514408
deviation-center-line_min: 0.0891864438046858
deviation-heading_max: 3.1430435112181
deviation-heading_mean: 1.3746753650795092
deviation-heading_median: 0.9385619497226724
deviation-heading_min: 0.4785340496545919
driven_any_max: 5.802025681192019
driven_any_mean: 2.755334109502341
driven_any_median: 2.4581491321316946
driven_any_min: 0.30301249255395574
driven_lanedir_consec_max: 2.469969700839292
driven_lanedir_consec_mean: 0.9840834930089776
driven_lanedir_consec_min: 0.14162307703730592
driven_lanedir_max: 2.469969700839292
driven_lanedir_mean: 0.9840834930089776
driven_lanedir_median: 0.6623705970796556
driven_lanedir_min: 0.14162307703730592
get_duckie_state_max: 1.496427199419807e-06
get_duckie_state_mean: 1.3823738765402522e-06
get_duckie_state_median: 1.3813443568089755e-06
get_duckie_state_min: 1.2703795931232508e-06
get_robot_state_max: 0.003828114333468447
get_robot_state_mean: 0.003695343844383945
get_robot_state_median: 0.003738359805152982
get_robot_state_min: 0.003476541433761369
get_state_dump_max: 0.0048833599489325015
get_state_dump_mean: 0.004751952026290589
get_state_dump_median: 0.004773300743395397
get_state_dump_min: 0.00457784666943906
get_ui_image_max: 0.035202218525445285
get_ui_image_mean: 0.03172843017129203
get_ui_image_median: 0.03209231843686861
get_ui_image_min: 0.02752686528598561
in-drivable-lane_max: 19.35000000000027
in-drivable-lane_mean: 9.51250000000008
in-drivable-lane_min: 1.2499999999999956
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 5.802025681192019, "get_ui_image": 0.02990160198087413, "step_physics": 0.11027141463873052, "survival_time": 30.6500000000003, "driven_lanedir": 2.469969700839292, "get_state_dump": 0.004768101322534418, "get_robot_state": 0.003727507513586783, "sim_render-ego0": 0.0038871629230362585, "get_duckie_state": 1.4110962808714628e-06, "in-drivable-lane": 19.35000000000027, "deviation-heading": 3.1430435112181, "agent_compute-ego0": 0.01263742726478204, "complete-iteration": 0.18055462254763427, "set_robot_commands": 0.0021987803984154317, "deviation-center-line": 0.6641603488159539, "driven_lanedir_consec": 2.469969700839292, "sim_compute_sim_state": 0.011073162967296688, "sim_compute_performance-ego0": 0.0019991894886625707},
 "LF-norm-zigzag-000-ego0": {"driven_any": 0.30301249255395574, "get_ui_image": 0.035202218525445285, "step_physics": 0.1711278210824995, "survival_time": 3.2999999999999963, "driven_lanedir": 0.14162307703730592, "get_state_dump": 0.00457784666943906, "get_robot_state": 0.003476541433761369, "sim_render-ego0": 0.003680303915223079, "get_duckie_state": 1.2703795931232508e-06, "in-drivable-lane": 1.2499999999999956, "deviation-heading": 1.374839785306067, "agent_compute-ego0": 0.011997126821261733, "complete-iteration": 0.2438538323587446, "set_robot_commands": 0.0023797981774629053, "deviation-center-line": 0.1232734842843482, "driven_lanedir_consec": 0.14162307703730592, "sim_compute_sim_state": 0.009431266072970715, "sim_compute_performance-ego0": 0.001892189481365147},
 "LF-norm-techtrack-000-ego0": {"driven_any": 3.1277185440341935, "get_ui_image": 0.0342830348928631, "step_physics": 0.11543289211153568, "survival_time": 14.300000000000068, "driven_lanedir": 0.6503029435266947, "get_state_dump": 0.0048833599489325015, "get_robot_state": 0.003828114333468447, "sim_render-ego0": 0.004134198929790005, "get_duckie_state": 1.3515924327464884e-06, "in-drivable-lane": 11.75000000000007, "deviation-heading": 0.4785340496545919, "agent_compute-ego0": 0.013547622368310802, "complete-iteration": 0.19093763703668573, "set_robot_commands": 0.0023325810449048618, "deviation-center-line": 0.0891864438046858, "driven_lanedir_consec": 0.6503029435266947, "sim_compute_sim_state": 0.010357840966679908, "sim_compute_performance-ego0": 0.0020442997537008147},
 "LF-norm-small_loop-000-ego0": {"driven_any": 1.7885797202291958, "get_ui_image": 0.02752686528598561, "step_physics": 0.10129949625800638, "survival_time": 8.449999999999985, "driven_lanedir": 0.6744382506326165, "get_state_dump": 0.0047785001642563765, "get_robot_state": 0.003749212096719181, "sim_render-ego0": 0.0039778414894552794, "get_duckie_state": 1.496427199419807e-06, "in-drivable-lane": 5.699999999999987, "deviation-heading": 0.5022841141392779, "agent_compute-ego0": 0.012765148106743307, "complete-iteration": 0.16363519921022304, "set_robot_commands": 0.0022326567593742817, "deviation-center-line": 0.0962306103155884, "driven_lanedir_consec": 0.6744382506326165, "sim_compute_sim_state": 0.005227971076965332, "sim_compute_performance-ego0": 0.0019869565963745116}}
set_robot_commands_max: 0.0023797981774629053
set_robot_commands_mean: 0.0022859540950393702
set_robot_commands_median: 0.0022826189021395715
set_robot_commands_min: 0.0021987803984154317
sim_compute_performance-ego0_max: 0.0020442997537008147
sim_compute_performance-ego0_mean: 0.001980658830025761
sim_compute_performance-ego0_median: 0.001993073042518542
sim_compute_performance-ego0_min: 0.001892189481365147
sim_compute_sim_state_max: 0.011073162967296688
sim_compute_sim_state_mean: 0.00902256027097816
sim_compute_sim_state_median: 0.009894553519825312
sim_compute_sim_state_min: 0.005227971076965332
sim_render-ego0_max: 0.004134198929790005
sim_render-ego0_mean: 0.003919876814376156
sim_render-ego0_median: 0.003932502206245769
sim_render-ego0_min: 0.003680303915223079
simulation-passed: 1
step_physics_max: 0.1711278210824995
step_physics_mean: 0.124532906022693
step_physics_median: 0.1128521533751331
step_physics_min: 0.10129949625800638
survival_time_max: 30.6500000000003
survival_time_mean: 14.17500000000009
survival_time_min: 3.2999999999999963
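The aggregate statistics above are summaries over the four episodes; for example, the reported survival_time figures can be reproduced with Python's statistics module (a sketch, with the per-episode values copied from the per-episodes details):

```python
from statistics import mean, median

# survival_time per episode, copied from the per-episodes details above
survival_times = [
    30.6500000000003,    # LF-norm-loop-000
    3.2999999999999963,  # LF-norm-zigzag-000
    14.300000000000068,  # LF-norm-techtrack-000
    8.449999999999985,   # LF-norm-small_loop-000
]

print("median:", median(survival_times))  # ~11.375 (average of the two middle episodes)
print("mean:  ", mean(survival_times))    # ~14.175
print("max:   ", max(survival_times))
print("min:   ", min(survival_times))
```

The same pattern applies to every other metric: each `_max`, `_mean`, `_median`, and `_min` entry is the corresponding aggregate of the four per-episode values.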
58124 | LFv-sim | success | yes | 0:11:07
52178 | LFv-sim | error | no | 0:05:25
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 140695391469424
- M:video_aido:cmdline(in:/;out:/) 140695391468416
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
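The root cause in the traceback above is PIL failing to identify 'banner1.png', which usually means the file's content does not match any image format it knows. A stdlib-only sanity check of the PNG signature (a diagnostic sketch, not part of the evaluator) can catch a corrupt or mislabeled banner before the video step runs:

```python
# PIL's UnidentifiedImageError typically means the file's magic bytes
# don't match any known image format; a PNG must start with this signature.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path: str) -> bool:
    """Return True if the file starts with the 8-byte PNG signature."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_MAGIC
    except OSError:
        return False
```

A file that fails this check (or `PIL.Image.open` followed by `im.verify()`) would have triggered exactly the error shown in this job.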
41674 | LFv-sim | success | no | 0:07:53
38091 | LFv-sim | success | no | 0:08:27
36249 | LFv-sim | success | no | 0:11:54
35688 | LFv-sim | success | no | 0:01:04
35310 | LFv-sim | error | no | 0:16:26
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9378/LFv-sim-reg04-c054faef3177-1-job35310:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9378/LFv-sim-reg04-c054faef3177-1-job35310/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9378/LFv-sim-reg04-c054faef3177-1-job35310/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9378/LFv-sim-reg04-c054faef3177-1-job35310/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9378/LFv-sim-reg04-c054faef3177-1-job35310/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9378/LFv-sim-reg04-c054faef3177-1-job35310/logs/challenges-runner/stderr.log
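The missing file is `challenge_results.yaml` under `challenge-results/` in the execution directory. A minimal existence check (the path layout is taken from the log above; the helper name is illustrative, not part of the evaluator) shows how this condition can be detected:

```python
from pathlib import Path
from typing import Optional

def find_challenge_results(execution_dir: str) -> Optional[Path]:
    """Return the challenge_results.yaml path if present, else None.

    The challenge-results/challenge_results.yaml layout is copied from the
    evaluator log above; this function is an illustrative sketch.
    """
    candidate = Path(execution_dir) / "challenge-results" / "challenge_results.yaml"
    return candidate if candidate.is_file() else None
```

For this job the check would return None, matching the "result file is not found" message.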
34958 | LFv-sim | success | no | 0:17:12
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
No reset possible
34891 | LFv-sim | timeout | no | 1:05:51
Job 34891 timed out: 3951 seconds elapsed, and the timeout is 3600.0 seconds.
34846 | LFv-sim | success | no | 0:16:34
34845 | LFv-sim | success | no | 0:13:34