
Submission 9350

Submission: 9350
Competing: yes
Challenge: aido5-LF-sim-validation
User: Himanshu Arora 🇨🇦
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58157
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58157

Episodes (per-episode statistics are linked from the episode images on the original page):

- LF-norm-loop-000
- LF-norm-small_loop-000
- LF-norm-techtrack-000
- LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | date started | date completed | duration | message
58157 | LFv-sim | success | yes | | | 0:09:25 |
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.483284904565765
survival_time_median: 7.999999999999988
deviation-center-line_median: 0.2123889949302954
in-drivable-lane_median: 4.92499999999999


Other stats
agent_compute-ego0_max: 0.0127477970123291
agent_compute-ego0_mean: 0.012520679031585396
agent_compute-ego0_median: 0.012577177495093746
agent_compute-ego0_min: 0.012180564123824988
complete-iteration_max: 0.3540329016171969
complete-iteration_mean: 0.295351578870162
complete-iteration_median: 0.28994727257759345
complete-iteration_min: 0.24747886870826424
deviation-center-line_max: 0.2581874853258639
deviation-center-line_mean: 0.19824446086393763
deviation-center-line_min: 0.11001236826929592
deviation-heading_max: 2.038580855786934
deviation-heading_mean: 1.278201374407114
deviation-heading_median: 1.2516103163424608
deviation-heading_min: 0.5710040091566019
driven_any_max: 3.0209645678401085
driven_any_mean: 1.7526206653118468
driven_any_median: 1.4591195969581754
driven_any_min: 1.0712788994909277
driven_lanedir_consec_max: 0.6406065458240694
driven_lanedir_consec_mean: 0.4708173126028749
driven_lanedir_consec_min: 0.2760928954559001
driven_lanedir_max: 0.6406065458240694
driven_lanedir_mean: 0.4708173126028749
driven_lanedir_median: 0.483284904565765
driven_lanedir_min: 0.2760928954559001
get_duckie_state_max: 2.2922814225351345e-06
get_duckie_state_mean: 2.2281525010820344e-06
get_duckie_state_median: 2.2554004969511885e-06
get_duckie_state_min: 2.109527587890625e-06
get_robot_state_max: 0.003975945903408912
get_robot_state_mean: 0.003881298736607506
get_robot_state_median: 0.0038757430827817433
get_robot_state_min: 0.0037977628774576258
get_state_dump_max: 0.005069237488966722
get_state_dump_mean: 0.004911983652163916
get_state_dump_median: 0.004902717313458843
get_state_dump_min: 0.004773262492771256
get_ui_image_max: 0.03688702716694012
get_ui_image_mean: 0.03131971921196537
get_ui_image_median: 0.030144266359267695
get_ui_image_min: 0.028103316962385976
in-drivable-lane_max: 13.000000000000068
in-drivable-lane_mean: 6.36250000000001
in-drivable-lane_min: 2.5999999999999908
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 3.0209645678401085, "get_ui_image": 0.029647026523467034, "step_physics": 0.21314401395859256, "survival_time": 15.450000000000085, "driven_lanedir": 0.2760928954559001, "get_state_dump": 0.00495269529281124, "get_robot_state": 0.003975945903408912, "sim_render-ego0": 0.003978008608664236, "get_duckie_state": 2.2249837075510332e-06, "in-drivable-lane": 13.000000000000068, "deviation-heading": 1.8630727585729956, "agent_compute-ego0": 0.012584722426629835, "complete-iteration": 0.28354499109329717, "set_robot_commands": 0.002348668344559208, "deviation-center-line": 0.2581874853258639, "driven_lanedir_consec": 0.2760928954559001, "sim_compute_sim_state": 0.0106358858846849, "sim_compute_performance-ego0": 0.002179378847922048},
 "LF-norm-zigzag-000-ego0": {"driven_any": 1.2704403387308147, "get_ui_image": 0.03688702716694012, "step_physics": 0.2765487524179312, "survival_time": 7.099999999999983, "driven_lanedir": 0.35354625337586354, "get_state_dump": 0.005069237488966722, "get_robot_state": 0.0037977628774576258, "sim_render-ego0": 0.0039478765501009, "get_duckie_state": 2.285817286351344e-06, "in-drivable-lane": 4.299999999999986, "deviation-heading": 2.038580855786934, "agent_compute-ego0": 0.012569632563557658, "complete-iteration": 0.3540329016171969, "set_robot_commands": 0.0022686294742397493, "deviation-center-line": 0.22412671158414732, "driven_lanedir_consec": 0.35354625337586354, "sim_compute_sim_state": 0.010757743061839284, "sim_compute_performance-ego0": 0.002090230688348517},
 "LF-norm-techtrack-000-ego0": {"driven_any": 1.0712788994909277, "get_ui_image": 0.03064150619506836, "step_physics": 0.22761832809448243, "survival_time": 6.199999999999986, "driven_lanedir": 0.6406065458240694, "get_state_dump": 0.004852739334106446, "get_robot_state": 0.003923023223876953, "sim_render-ego0": 0.003852811813354492, "get_duckie_state": 2.109527587890625e-06, "in-drivable-lane": 2.5999999999999908, "deviation-heading": 0.6401478741119259, "agent_compute-ego0": 0.0127477970123291, "complete-iteration": 0.2963495540618897, "set_robot_commands": 0.002284395217895508, "deviation-center-line": 0.11001236826929592, "driven_lanedir_consec": 0.6406065458240694, "sim_compute_sim_state": 0.008277698516845703, "sim_compute_performance-ego0": 0.002058052062988281},
 "LF-norm-small_loop-000-ego0": {"driven_any": 1.647798855185536, "get_ui_image": 0.028103316962385976, "step_physics": 0.18493415923091952, "survival_time": 8.899999999999991, "driven_lanedir": 0.6130235557556665, "get_state_dump": 0.004773262492771256, "get_robot_state": 0.0038284629416865342, "sim_render-ego0": 0.00386851193518612, "get_duckie_state": 2.2922814225351345e-06, "in-drivable-lane": 5.549999999999995, "deviation-heading": 0.5710040091566019, "agent_compute-ego0": 0.012180564123824988, "complete-iteration": 0.24747886870826424, "set_robot_commands": 0.002303435149805506, "deviation-center-line": 0.2006512782764435, "driven_lanedir_consec": 0.6130235557556665, "sim_compute_sim_state": 0.005328033223498467, "sim_compute_performance-ego0": 0.002063756548492602}}
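The reported *_median aggregates are computed across the four episodes. A quick sketch of that aggregation, using values copied from the per-episodes details above (with an even count, Python's statistics.median averages the two middle values, which is why survival_time_median lands near 8.0):

```python
from statistics import median

# Per-episode values copied from the "per-episodes details" JSON above,
# in page order: loop, zigzag, techtrack, small_loop.
survival_time = [15.450000000000085, 7.099999999999983,
                 6.199999999999986, 8.899999999999991]
driven_lanedir_consec = [0.2760928954559001, 0.35354625337586354,
                         0.6406065458240694, 0.6130235557556665]

# With four episodes, the median is the mean of the two middle values.
print(median(survival_time))          # ≈ 8.0 (reported: 7.999999999999988)
print(median(driven_lanedir_consec))  # ≈ 0.4833 (reported: 0.483284904565765)
```

The trailing-nines values such as 7.999999999999988 are ordinary floating-point artefacts of this kind of arithmetic, not a distinct score.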
set_robot_commands_max: 0.002348668344559208
set_robot_commands_mean: 0.002301282046624993
set_robot_commands_median: 0.002293915183850507
set_robot_commands_min: 0.0022686294742397493
sim_compute_performance-ego0_max: 0.002179378847922048
sim_compute_performance-ego0_mean: 0.002097854536937862
sim_compute_performance-ego0_median: 0.0020769936184205596
sim_compute_performance-ego0_min: 0.002058052062988281
sim_compute_sim_state_max: 0.010757743061839284
sim_compute_sim_state_mean: 0.008749840171717089
sim_compute_sim_state_median: 0.009456792200765302
sim_compute_sim_state_min: 0.005328033223498467
sim_render-ego0_max: 0.003978008608664236
sim_render-ego0_mean: 0.003911802226826437
sim_render-ego0_median: 0.00390819424264351
sim_render-ego0_min: 0.003852811813354492
simulation-passed: 1
step_physics_max: 0.2765487524179312
step_physics_mean: 0.2255613134254814
step_physics_median: 0.22038117102653748
step_physics_min: 0.18493415923091952
survival_time_max: 15.450000000000085
survival_time_mean: 9.412500000000012
survival_time_min: 6.199999999999986
No reset possible
58146 | LFv-sim | timeout | yes | | | | No reset possible
52215 | LFv-sim | error | no | | | 0:03:34 |
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139710700702976
- M:video_aido:cmdline(in:/;out:/) 139710700110320
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
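The traceback above is an example of Python's exception chaining: each `raise ... from e` records the lower-level error as `__cause__`, producing the "The above exception was the direct cause of the following exception" sections and letting the original PIL failure surface inside the final InvalidEvaluator. A minimal stdlib sketch of the same pattern (the class and function names here are illustrative stand-ins, not Duckietown's actual API):

```python
class InvalidEvaluator(Exception):
    """Stand-in for the framework's top-level error (hypothetical name reuse)."""

def open_image(path):
    # Mimics PIL failing to parse an unreadable file such as 'banner1.png'.
    raise ValueError(f'Could not open filename "{path}".')

def run_episodes():
    try:
        open_image("banner1.png")
    except ValueError as e:
        # 'from e' links the two exceptions: e becomes __cause__ of the
        # new one, which the traceback printer renders as a chained report.
        raise InvalidEvaluator("Anomalous error while running episodes") from e

try:
    run_episodes()
except InvalidEvaluator as exc:
    assert isinstance(exc.__cause__, ValueError)  # the chained low-level cause
```

In the real log the chain is three levels deep: UnidentifiedImageError causes the procgraph ValueError, which causes BadMethodCall, which causes InvalidEvaluator.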
41694 | LFv-sim | success | no | | | 0:06:55 |
38141 | LFv-sim | error | no | | | 0:00:44 |
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner_daffy-6.0.49-py3.8.egg/duckietown_challenges_runner/runner.py", line 1088, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/media/sdb/duckietown-tmp-evals/production/aido5-LF-sim-validation/submission9350/LFv-sim-mont02-80325a328f54-1-job38141-a-wd/challenge-results/challenge_results.yaml' does not exist.
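The NoResultsFound failures in jobs 38141 and 35721 both mean the same thing: the evaluator container exited before writing challenge-results/challenge_results.yaml, so the runner found nothing to score. A sketch of that check using only the standard library (the exception name mirrors the log; the function is an illustrative stand-in for the runner's read_challenge_results, not its real implementation):

```python
from pathlib import Path

class NoResultsFound(Exception):
    """Raised when a job's results file was never written (mirrors the log)."""

def read_challenge_results(wd):
    # The runner expects the results file at a fixed location inside the
    # job's working directory; a missing file means the evaluator or the
    # solution container died before producing results.
    results = Path(wd) / "challenge-results" / "challenge_results.yaml"
    if not results.exists():
        raise NoResultsFound(f"File '{results}' does not exist.")
    return results.read_text()

# Usage: an empty working directory reproduces the error message seen above.
try:
    read_challenge_results("/tmp/empty-job-wd")
except NoResultsFound as e:
    print(e)
```

As the later job 35337 message notes, this usually indicates the evaluator did not finish, sometimes due to an import error; the container logs are the place to look.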
36284 | LFv-sim | success | no | | | 0:12:54 |
36282 | LFv-sim | success | no | | | 0:11:57 |
35721 | LFv-sim | error | no | | | 0:00:48 |
The container "solution" exited with code 1.


Look at the logs for the container to know more about the error.

No results found! Something very wrong. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1063, in run_one
    cr = read_challenge_results(wd)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges/challenge_results.py", line 90, in read_challenge_results
    raise NoResultsFound(msg)
duckietown_challenges.challenge_results.NoResultsFound: File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-noname-sandy-1-job35721-a-wd/challenge-results/challenge_results.yaml' does not exist.
35337 | LFv-sim | error | no | | | 0:13:47 |
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-reg05-b2dee9d94ee0-1-job35337:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-reg05-b2dee9d94ee0-1-job35337/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish and some times that there was an import error.
Check the evaluator log to see what happened.

List of all files:

- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-reg05-b2dee9d94ee0-1-job35337/docker-compose.original.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-reg05-b2dee9d94ee0-1-job35337/docker-compose.yaml
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-reg05-b2dee9d94ee0-1-job35337/logs/challenges-runner/stdout.log
- /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission9350/LFv-sim-reg05-b2dee9d94ee0-1-job35337/logs/challenges-runner/stderr.log
34973 | LFv-sim | success | no | | | 0:14:55 |
34734 | LFv-sim | success | no | | | 0:16:16 |
34733 | LFv-sim | success | no | | | 0:16:04 |