
Submission 11164

Submission: 11164
Competing: yes
Challenge: aido5-LF-sim-validation
User: Moustafa Elarabi
Date submitted:
Last status update:
Complete: complete
Details: Evaluation is complete.
Sisters:
Result: 💚
Jobs: LFv-sim: 55618
Next:
User label: template-pytorch
Admin priority: 50
Blessing: n/a
User priority: 50

Job 55618

Episodes:

LF-norm-loop-000

LF-norm-small_loop-000

LF-norm-techtrack-000

LF-norm-zigzag-000

Evaluation jobs for this submission

Job 55618 | step: LFv-sim | status: success | up to date: yes | duration: 0:42:55
driven_lanedir_consec_median: -0.30593328945898757
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.0159896466082436
in-drivable-lane_median: 6.574999999999626


other stats
agent_compute-ego0_max: 0.01970402426167789
agent_compute-ego0_mean: 0.0191138263149722
agent_compute-ego0_median: 0.019161096917501
agent_compute-ego0_min: 0.01842908716320892
complete-iteration_max: 0.31086786799784205
complete-iteration_mean: 0.2691426922836272
complete-iteration_median: 0.26918002588365797
complete-iteration_min: 0.2273428493693508
deviation-center-line_max: 5.4057993141459875
deviation-center-line_mean: 3.4461227199309583
deviation-center-line_min: 2.346712272361359
deviation-heading_max: 23.955033415898495
deviation-heading_mean: 21.58376045116919
deviation-heading_median: 20.933957918795834
deviation-heading_min: 20.51209255118663
driven_any_max: 0.4573204552482057
driven_any_mean: 0.4047870140840405
driven_any_median: 0.4308776982749589
driven_any_min: 0.3000722045380381
driven_lanedir_consec_max: -0.24463816903870095
driven_lanedir_consec_mean: -0.3026548857794611
driven_lanedir_consec_min: -0.35411479516116817
driven_lanedir_max: -0.24463816903870095
driven_lanedir_mean: -0.3026548857794611
driven_lanedir_median: -0.30593328945898757
driven_lanedir_min: -0.35411479516116817
get_duckie_state_max: 1.1805789258259718e-06
get_duckie_state_mean: 1.1345230470986092e-06
get_duckie_state_median: 1.139188189986941e-06
get_duckie_state_min: 1.0791368825945823e-06
get_robot_state_max: 0.0036345302413444934
get_robot_state_mean: 0.003529896644827329
get_robot_state_median: 0.0035296577299564307
get_robot_state_min: 0.003425740878051961
get_state_dump_max: 0.005075139665881561
get_state_dump_mean: 0.004622206947984148
get_state_dump_median: 0.004565201234460175
get_state_dump_min: 0.0042832856571346795
get_ui_image_max: 0.03286561997705058
get_ui_image_mean: 0.02828880422419057
get_ui_image_median: 0.02782089287394985
get_ui_image_min: 0.024647811171811983
in-drivable-lane_max: 19.89999999999887
in-drivable-lane_mean: 8.574999999999513
in-drivable-lane_min: 1.249999999999929
per-episodes details:
LF-norm-loop-000-ego0: {"driven_any": 0.4573204552482057, "get_ui_image": 0.024763606966385535, "step_physics": 0.1749534972204356, "survival_time": 59.99999999999873, "driven_lanedir": -0.26350437654951797, "get_state_dump": 0.0042832856571346795, "get_robot_state": 0.003425740878051961, "sim_render-ego0": 0.0036752325212032378, "get_duckie_state": 1.1275749619457745e-06, "in-drivable-lane": 19.89999999999887, "deviation-heading": 20.51209255118663, "agent_compute-ego0": 0.01970402426167789, "complete-iteration": 0.24197376479117896, "set_robot_commands": 0.002140238719816311, "deviation-center-line": 2.813608284637264, "driven_lanedir_consec": -0.26350437654951797, "sim_compute_sim_state": 0.007046563937006941, "sim_compute_performance-ego0": 0.0018997579490413871}
LF-norm-zigzag-000-ego0: {"driven_any": 0.3000722045380381, "get_ui_image": 0.03286561997705058, "step_physics": 0.23072995849691957, "survival_time": 59.99999999999873, "driven_lanedir": -0.24463816903870095, "get_state_dump": 0.005075139665881561, "get_robot_state": 0.003553794484452146, "sim_render-ego0": 0.0038307546874466383, "get_duckie_state": 1.150801418028108e-06, "in-drivable-lane": 6.899999999999608, "deviation-heading": 20.66815517463126, "agent_compute-ego0": 0.018788361728042487, "complete-iteration": 0.31086786799784205, "set_robot_commands": 0.0022975401914089944, "deviation-center-line": 5.4057993141459875, "driven_lanedir_consec": -0.24463816903870095, "sim_compute_sim_state": 0.011683361218632707, "sim_compute_performance-ego0": 0.0019591777747517123}
LF-norm-techtrack-000-ego0: {"driven_any": 0.4113845386648501, "get_ui_image": 0.030878178781514165, "step_physics": 0.221197393911268, "survival_time": 59.99999999999873, "driven_lanedir": -0.34836220236845716, "get_state_dump": 0.004647499516445036, "get_robot_state": 0.0036345302413444934, "sim_render-ego0": 0.003840001596201469, "get_duckie_state": 1.1805789258259718e-06, "in-drivable-lane": 1.249999999999929, "deviation-heading": 23.955033415898495, "agent_compute-ego0": 0.01953383210695951, "complete-iteration": 0.29638628697613695, "set_robot_commands": 0.0022414860181467025, "deviation-center-line": 3.218371008579223, "driven_lanedir_consec": -0.34836220236845716, "sim_compute_sim_state": 0.008376203508400897, "sim_compute_performance-ego0": 0.001947922472354276}
LF-norm-small_loop-000-ego0: {"driven_any": 0.45037085788506775, "get_ui_image": 0.024647811171811983, "step_physics": 0.16207089888662424, "survival_time": 59.99999999999873, "driven_lanedir": -0.35411479516116817, "get_state_dump": 0.0044829029524753134, "get_robot_state": 0.0035055209754607164, "sim_render-ego0": 0.0037408754490098786, "get_duckie_state": 1.0791368825945823e-06, "in-drivable-lane": 6.249999999999645, "deviation-heading": 21.199760662960404, "agent_compute-ego0": 0.01842908716320892, "complete-iteration": 0.2273428493693508, "set_robot_commands": 0.0022213306157019214, "deviation-center-line": 2.346712272361359, "driven_lanedir_consec": -0.35411479516116817, "sim_compute_sim_state": 0.006267992483388375, "sim_compute_performance-ego0": 0.0018926456905622269}
set_robot_commands_max: 0.0022975401914089944
set_robot_commands_mean: 0.0022251488862684824
set_robot_commands_median: 0.002231408316924312
set_robot_commands_min: 0.002140238719816311
sim_compute_performance-ego0_max: 0.0019591777747517123
sim_compute_performance-ego0_mean: 0.0019248759716774008
sim_compute_performance-ego0_median: 0.0019238402106978316
sim_compute_performance-ego0_min: 0.0018926456905622269
sim_compute_sim_state_max: 0.011683361218632707
sim_compute_sim_state_mean: 0.00834353028685723
sim_compute_sim_state_median: 0.007711383722703919
sim_compute_sim_state_min: 0.006267992483388375
sim_render-ego0_max: 0.003840001596201469
sim_render-ego0_mean: 0.003771716063465306
sim_render-ego0_median: 0.003785815068228258
sim_render-ego0_min: 0.0036752325212032378
simulation-passed: 1
step_physics_max: 0.23072995849691957
step_physics_mean: 0.19723793712881185
step_physics_median: 0.1980754455658518
step_physics_min: 0.16207089888662424
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
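
For reference, the aggregate values above are simply the per-metric min/mean/median/max taken across the four episodes listed in "per-episodes details". A minimal sketch of that aggregation (assuming the per-episode dictionary has been saved locally as stats.json; the file name is illustrative):

import json
from statistics import mean, median

# Load the per-episode details (same structure as "per-episodes details" above).
with open("stats.json") as f:
    per_episode = json.load(f)

# Collect each metric's values across episodes, then aggregate.
metrics = {}
for episode, values in per_episode.items():
    for name, value in values.items():
        metrics.setdefault(name, []).append(value)

for name, values in sorted(metrics.items()):
    print(f"{name}_min: {min(values)}")
    print(f"{name}_mean: {mean(values)}")
    print(f"{name}_median: {median(values)}")
    print(f"{name}_max: {max(values)}")

With four episodes the median is the average of the two middle values, which is why, for example, driven_any_median (0.4308...) does not coincide with any single episode's driven_any.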
Job 50239 | step: LFv-sim | status: error | up to date: no | duration: 0:10:12
Message:
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
    im = Image.open(filename)
  File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
    result = block.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
    image = imread(self.config.file)
  File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
    raise ValueError(msg) from e
ValueError: Could not open filename "banner1.png".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 329, in main
    make_video2(
  File "/usr/local/lib/python3.8/site-packages/aido_analyze/utils_video.py", line 149, in make_video2
    pg("video_aido", params)
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 280, in pg
    raise e
  File "/usr/local/lib/python3.8/site-packages/procgraph/scripts/pgmain.py", line 277, in pg
    model.update()
  File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 321, in update
    raise BadMethodCall("update", block, traceback.format_exc())
procgraph.core.exceptions.BadMethodCall: User-thrown exception while calling update() in block 'static_image'.
- B:StaticImage:static_image(in:/;out:rgb) 139807767210064
- M:video_aido:cmdline(in:/;out:/) 139807767209872
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 51, in imread
>     im = Image.open(filename)
>   File "/usr/local/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
>     raise UnidentifiedImageError(
> PIL.UnidentifiedImageError: cannot identify image file 'banner1.png'
> 
> The above exception was the direct cause of the following exception:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.8/site-packages/procgraph/core/model.py", line 316, in update
>     result = block.update()
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 31, in update
>     image = imread(self.config.file)
>   File "/usr/local/lib/python3.8/site-packages/procgraph_pil/imread_imp.py", line 54, in imread
>     raise ValueError(msg) from e
> ValueError: Could not open filename "banner1.png".

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
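
The failure above is in the video-rendering step, not in the agent: PIL raises UnidentifiedImageError when a file exists but is empty or not a valid image. A quick local check along these lines distinguishes the two cases (a sketch; banner1.png is assumed to be in the working directory):

import os
from PIL import Image

path = "banner1.png"
if not os.path.exists(path):
    print(f"{path} does not exist")
else:
    print(f"{path} size: {os.path.getsize(path)} bytes")
    try:
        with Image.open(path) as im:
            im.verify()  # cheap integrity check without a full decode
        print(f"{path} is a readable image")
    except Exception as e:
        print(f"cannot identify {path}: {e}")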
Job 40120 | step: LFv-sim | status: success | up to date: no | duration: 0:08:27
Job 39567 | step: LFv-sim | status: error | up to date: no | duration: 1:00:52
Message: Waited 3609 seconds for container to finish. Giving up.
Job 39140 | step: LFv-sim | status: host-error | up to date: no | duration: 0:00:36
Message:
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 678, in scoring_context
    yield cie
  File "experiment_manager.py", line 683, in go
    wrap(cie)
  File "experiment_manager.py", line 668, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "experiment_manager.py", line 119, in main
    raise InvalidEnvironment(msg=msg, lf=list_all_files("/fifos"))
duckietown_challenges.exceptions.InvalidEnvironment: Path /fifos/runner does not exist
│ lf: [/fifos/experiment_manager]
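
This is a host-side error: the experiment manager expects the runner's FIFO to be mounted at /fifos/runner, and only /fifos/experiment_manager was present. A pre-flight check in this spirit (a sketch using the paths from the log above, not the actual runner code):

import os

# Paths taken from the error message; adjust to the actual mounts.
required = ["/fifos/runner", "/fifos/experiment_manager"]
missing = [p for p in required if not os.path.exists(p)]
if missing:
    raise RuntimeError(f"missing FIFO paths: {missing}; check the runner's volume mounts")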
Job 39127 | step: LFv-sim | status: host-error | up to date: no | duration: 0:00:18
Message:
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 687, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 888, in run_single
    write_logs(wd, project, services=config["services"])
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1388, in write_logs
    services2id: Dict[str, str] = get_services_id(wd, project, services)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 937, in get_services_id
    container = client.containers.get(container_id)
  File "/usr/local/lib/python3.8/dist-packages/docker/models/containers.py", line 880, in get
    resp = self.client.api.inspect_container(container_id)
  File "/usr/local/lib/python3.8/dist-packages/docker/utils/decorators.py", line 16, in wrapped
    raise errors.NullResource(
docker.errors.NullResource: Resource ID was not provided
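
docker-py raises NullResource when containers.get() is called with an empty or None id, which is what the runner hit here while collecting service logs. A guarded lookup along these lines illustrates the failure mode (a sketch, not the runner's code; safe_inspect is a made-up helper name):

import docker

client = docker.from_env()

def safe_inspect(container_id):
    # docker-py raises docker.errors.NullResource if container_id is falsy.
    if not container_id:
        print("no container id recorded for this service; skipping")
        return None
    return client.containers.get(container_id)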
Job 39053 | step: LFv-sim | status: success | up to date: no | duration: 0:08:37
Job 39052 | step: LFv-sim | status: success | up to date: no | duration: 0:08:51