
Submission 6840

Submission: 6840
Competing: yes
Challenge: aido5-LF-sim-validation
User: Daniil Lisus
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 58558
Next:
User label: exercise_ros_template
Admin priority: 50
Blessing: n/a
User priority: 50

Job 58558

Episodes evaluated (detail images omitted):

LF-norm-loop-000
LF-norm-small_loop-000
LF-norm-techtrack-000
LF-norm-zigzag-000

Evaluation jobs for this submission

Job ID | step | status | up to date | duration
58558 | LFv-sim | success | yes | 0:13:05
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
driven_lanedir_consec_median: 0.4568709307490939
survival_time_median: 17.700000000000117
deviation-center-line_median: 0.4647686883398102
in-drivable-lane_median: 10.125000000000076


Other stats:

agent_compute-ego0_max: 0.03839408474332305
agent_compute-ego0_mean: 0.029500955492540403
agent_compute-ego0_median: 0.03352869249272445
agent_compute-ego0_min: 0.01255235224138966
complete-iteration_max: 0.2430033537607427
complete-iteration_mean: 0.1929914518862452
complete-iteration_median: 0.18035106054922417
complete-iteration_min: 0.1682603326857899
deviation-center-line_max: 0.9947963672772452
deviation-center-line_mean: 0.5446282330299405
deviation-center-line_min: 0.25417918816289614
deviation-heading_max: 8.149082907717037
deviation-heading_mean: 3.131052945249568
deviation-heading_median: 1.78639194853619
deviation-heading_min: 0.8023449762088545
driven_any_max: 4.976909379783186
driven_any_mean: 2.684822528597156
driven_any_median: 2.32843333876435
driven_any_min: 1.1055140570767377
driven_lanedir_consec_max: 1.499279972227225
driven_lanedir_consec_mean: 0.6958607334381361
driven_lanedir_consec_min: 0.3704211000271316
driven_lanedir_max: 1.499279972227225
driven_lanedir_mean: 0.7128406383499386
driven_lanedir_median: 0.4589778893345453
driven_lanedir_min: 0.4341268025034386
get_duckie_state_max: 1.4142704269866528e-06
get_duckie_state_mean: 1.3128373429902242e-06
get_duckie_state_median: 1.3489240436833887e-06
get_duckie_state_min: 1.139230857607466e-06
get_robot_state_max: 0.004020740615574475
get_robot_state_mean: 0.0037742283734435527
get_robot_state_median: 0.003778317589521517
get_robot_state_min: 0.003519537699156703
get_state_dump_max: 0.005079078739280597
get_state_dump_mean: 0.004800248749927385
get_state_dump_median: 0.004809742069007
get_state_dump_min: 0.004502432122414942
get_ui_image_max: 0.040018116420880975
get_ui_image_mean: 0.03163574890885698
get_ui_image_median: 0.030054396625336013
get_ui_image_min: 0.02641608596387492
in-drivable-lane_max: 29.050000000000008
in-drivable-lane_mean: 13.58750000000004
in-drivable-lane_min: 5.049999999999999
per-episodes details: {
"LF-norm-loop-000-ego0": {"driven_any": 3.2648348961257008, "get_ui_image": 0.030015590975864776, "step_physics": 0.09924930004985784, "survival_time": 24.40000000000021, "driven_lanedir": 1.499279972227225, "get_state_dump": 0.004893470151780329, "get_robot_state": 0.003969056962214121, "sim_render-ego0": 0.0039910017103261496, "get_duckie_state": 1.4056457332306845e-06, "in-drivable-lane": 12.800000000000129, "deviation-heading": 2.648486000778844, "agent_compute-ego0": 0.03158039403108, "complete-iteration": 0.18904406356421471, "set_robot_commands": 0.0024358475378929716, "deviation-center-line": 0.9947963672772452, "driven_lanedir_consec": 1.499279972227225, "sim_compute_sim_state": 0.010675651164142631, "sim_compute_performance-ego0": 0.002136044219471438},
"LF-norm-zigzag-000-ego0": {"driven_any": 4.976909379783186, "get_ui_image": 0.040018116420880975, "step_physics": 0.13374006715717368, "survival_time": 36.650000000000055, "driven_lanedir": 0.4383407196743414, "get_state_dump": 0.005079078739280597, "get_robot_state": 0.004020740615574475, "sim_render-ego0": 0.004235602529562137, "get_duckie_state": 1.4142704269866528e-06, "in-drivable-lane": 29.050000000000008, "deviation-heading": 8.149082907717037, "agent_compute-ego0": 0.03839408474332305, "complete-iteration": 0.2430033537607427, "set_robot_commands": 0.0025268325363907566, "deviation-center-line": 0.6692052120080175, "driven_lanedir_consec": 0.3704211000271316, "sim_compute_sim_state": 0.012665704745362823, "sim_compute_performance-ego0": 0.0022221868629351623},
"LF-norm-techtrack-000-ego0": {"driven_any": 1.1055140570767377, "get_ui_image": 0.03009320227480725, "step_physics": 0.10513796595578694, "survival_time": 8.999999999999993, "driven_lanedir": 0.4796150589947492, "get_state_dump": 0.004502432122414942, "get_robot_state": 0.003519537699156703, "sim_render-ego0": 0.003628165682376419, "get_duckie_state": 1.292202354136093e-06, "in-drivable-lane": 5.049999999999999, "deviation-heading": 0.9242978962935354, "agent_compute-ego0": 0.01255235224138966, "complete-iteration": 0.17165805753423363, "set_robot_commands": 0.0022123623948070883, "deviation-center-line": 0.2603321646716029, "driven_lanedir_consec": 0.4796150589947492, "sim_compute_sim_state": 0.008053733499010623, "sim_compute_performance-ego0": 0.00187114457399147},
"LF-norm-small_loop-000-ego0": {"driven_any": 1.3920317814029988, "get_ui_image": 0.02641608596387492, "step_physics": 0.08502626311185672, "survival_time": 11.00000000000002, "driven_lanedir": 0.4341268025034386, "get_state_dump": 0.004726013986233672, "get_robot_state": 0.0035875782168289117, "sim_render-ego0": 0.003677997114431804, "get_duckie_state": 1.139230857607466e-06, "in-drivable-lane": 7.450000000000026, "deviation-heading": 0.8023449762088545, "agent_compute-ego0": 0.0354769909543689, "complete-iteration": 0.1682603326857899, "set_robot_commands": 0.0022125244140625, "deviation-center-line": 0.25417918816289614, "driven_lanedir_consec": 0.4341268025034386, "sim_compute_sim_state": 0.005115710772000826, "sim_compute_performance-ego0": 0.001932698677028466}
}
set_robot_commands_max: 0.0025268325363907566
set_robot_commands_mean: 0.002346891720788329
set_robot_commands_median: 0.0023241859759777356
set_robot_commands_min: 0.0022123623948070883
sim_compute_performance-ego0_max: 0.0022221868629351623
sim_compute_performance-ego0_mean: 0.002040518583356634
sim_compute_performance-ego0_median: 0.002034371448249952
sim_compute_performance-ego0_min: 0.00187114457399147
sim_compute_sim_state_max: 0.012665704745362823
sim_compute_sim_state_mean: 0.009127700045129227
sim_compute_sim_state_median: 0.009364692331576628
sim_compute_sim_state_min: 0.005115710772000826
sim_render-ego0_max: 0.004235602529562137
sim_render-ego0_mean: 0.003883191759174127
sim_render-ego0_median: 0.0038344994123789768
sim_render-ego0_min: 0.003628165682376419
simulation-passed: 1
step_physics_max: 0.13374006715717368
step_physics_mean: 0.1057883990686688
step_physics_median: 0.1021936330028224
step_physics_min: 0.08502626311185672
survival_time_max: 36.650000000000055
survival_time_mean: 20.26250000000007
survival_time_min: 8.999999999999993
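The aggregate rows above are simple per-episode reductions: each *_median, *_mean, *_min, and *_max value is computed over the four episodes listed in the per-episodes details. A minimal sketch, using only the survival_time values copied from that table:

```python
import statistics

# Per-episode survival times, copied from the "per-episodes details" block.
survival_times = {
    "LF-norm-loop-000-ego0": 24.40000000000021,
    "LF-norm-zigzag-000-ego0": 36.650000000000055,
    "LF-norm-techtrack-000-ego0": 8.999999999999993,
    "LF-norm-small_loop-000-ego0": 11.00000000000002,
}

values = list(survival_times.values())
print("median:", statistics.median(values))   # ~17.7, matches survival_time_median
print("mean:", statistics.mean(values))       # ~20.2625, matches survival_time_mean
print("min/max:", min(values), max(values))
```

With four episodes, the median is the average of the two middle values, which is why survival_time_median (17.7) appears in no single episode.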
58555 | LFv-sim | success | yes | 0:07:04
58554 | LFv-sim | success | yes | 0:08:37
58552 | LFv-sim | success | yes | 0:07:26
58550 | LFv-sim | success | yes | 0:09:09
52504 | LFv-sim | error | no | 0:03:37
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139977385594640
- M:video_aido:cmdline(in:/;out:/) 139977385592672
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
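The root cause of this failure is that PIL could not parse 'banner1.png' while the evaluator assembled the episode video. A minimal pre-flight check can be sketched as follows, assuming the failure means the file was missing, empty, or not actually PNG data; the helper name is hypothetical, not part of the evaluator's API:

```python
# Every real PNG file starts with this fixed 8-byte signature; checking it
# up front gives a clearer error than PIL's UnidentifiedImageError raised
# deep inside the video pipeline.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path: str) -> bool:
    """Return True if the file exists and begins with the PNG signature."""
    try:
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE
    except OSError:
        # Missing or unreadable file counts as "not a PNG".
        return False
```

Running such a check on 'banner1.png' before invoking the video pipeline would turn this anomalous evaluator error into an immediate, explicit diagnostic.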
52498 | LFv-sim | error | no | 0:02:44
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139622729560368
- M:video_aido:cmdline(in:/;out:/) 139622729073424
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
41806 | LFv-sim | success | no | 0:07:23
38361 | LFv-sim | success | no | 0:08:02
36451 | LFv-sim | success | no | 0:10:56
35868 | LFv-sim | success | no | 0:01:06
35451 | LFv-sim | error | no | 0:12:23
The result file is not found in working dir /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6840/LFv-sim-reg02-1b92df2e7e91-1-job35451:

File '/tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6840/LFv-sim-reg02-1b92df2e7e91-1-job35451/challenge-results/challenge_results.yaml' does not exist.

This usually means that the evaluator did not finish; sometimes it indicates an import error.
Check the evaluator log to see what happened.

List of all files:

 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6840/LFv-sim-reg02-1b92df2e7e91-1-job35451/docker-compose.original.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6840/LFv-sim-reg02-1b92df2e7e91-1-job35451/docker-compose.yaml
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6840/LFv-sim-reg02-1b92df2e7e91-1-job35451/logs/challenges-runner/stdout.log
 - /tmp/duckietown/DT18/evaluator/executions/aido5-LF-sim-validation/submission6840/LFv-sim-reg02-1b92df2e7e91-1-job35451/logs/challenges-runner/stderr.log
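Job 35451 failed differently from the banner errors: the evaluator exited without writing challenge-results/challenge_results.yaml, so the runner had nothing to score. The existence check the runner reports can be sketched roughly as follows; find_challenge_results is a hypothetical name, and the real runner's internals may differ:

```python
import os

def find_challenge_results(working_dir: str) -> str:
    """Return the path to the results file the evaluator must produce,
    or raise if the evaluation never wrote it (as in job 35451)."""
    expected = os.path.join(
        working_dir, "challenge-results", "challenge_results.yaml"
    )
    if not os.path.exists(expected):
        raise FileNotFoundError(
            f"File '{expected}' does not exist. "
            "This usually means that the evaluator did not finish."
        )
    return expected
```

In this job's working directory, only the docker-compose files and the runner logs were present, so a check like this is what produced the error above.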
35135 | LFv-sim | success | no | 0:17:36
33627 | LFv-sim | success | no | 0:15:08
33429 | LFv-sim | success | no | 0:10:16
33428 | LFv-sim | success | no | 0:10:18