
Evaluator 5178

ID: 5178
evaluator: gpu-production-spot-0-01
owner: I don't have one 😀
machine: gpu-production-spot-0_f06c83e65cd4
process: gpu-production-spot-0-01_f06c83e65cd4
version: 6.2.7
first heard:
last heard:
status: inactive
# evaluating:
# success: 129 (job 70625)
# timeout: 1 (job 71756)
# failed: 23 (job 75097)
# error:
# aborted: 4 (job 70960)
# host-error: 6 (job 70644)
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 64
Processor frequency: 0.0 GHz
Free % of processors: 99%
RAM total: 249.0 GB
RAM free: 186.5 GB
Disk: 969.3 GB
Disk available: 588.4 GB
Docker Hub
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track:
for 2021, map is ETH_small_inter
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:
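The last two rows report whether the /ipfs and /ipns mountpoints are visible to the evaluator. A minimal sketch of such a check, using only the standard library (the `mountpoint_available` helper is illustrative; the platform's actual probe may differ):

```python
import os

def mountpoint_available(path: str) -> bool:
    """Return True if `path` exists and is a mount point (e.g. /ipfs, /ipns)."""
    return os.path.ismount(path)

# "/" is always a mount point on POSIX systems; a missing path is not.
print(mountpoint_available("/"))              # True on POSIX
print(mountpoint_available("/no-such-mount"))
```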

Evaluator jobs

Job ID | submission | user | user label | challenge | step | status | up to date | evaluator | date started | date completed | duration | message
Job 75470 — submission 14887 by Liam Han (label: exercises_braitenberg), challenge mooc-BV1, step sim-2of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:02:36
distance-from-start_mean: 2.4724860827843647

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.01154131267381751
complete-iteration: 0.2224106508752574
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 2.4724860827843647
driven_any: 2.9689906018052854
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.08138685019119926
get_robot_state: 0.003856234965117082
get_state_dump: 0.017555094801861307
get_ui_image: 0.015084051049273949
in-drivable-lane: 11.450000000000028
set_robot_commands: 0.0023531198501586916
sim_compute_performance-ego0: 0.0019705285196718963
sim_compute_sim_state: 0.010560787242391834
sim_render-ego0: 0.003903460502624511
simulation-passed: 1
step_physics: 0.07410022072170092
survival_time: 11.450000000000028
per-episode details: {"d40-ego0": {"driven_any": 2.9689906018052854, "get_ui_image": 0.015084051049273949, "step_physics": 0.07410022072170092, "survival_time": 11.450000000000028, "driven_lanedir": 0.0, "get_state_dump": 0.017555094801861307, "get_robot_state": 0.003856234965117082, "sim_render-ego0": 0.003903460502624511, "get_duckie_state": 0.08138685019119926, "in-drivable-lane": 11.450000000000028, "deviation-heading": 0.0, "agent_compute-ego0": 0.01154131267381751, "complete-iteration": 0.2224106508752574, "set_robot_commands": 0.0023531198501586916, "distance-from-start": 2.4724860827843647, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010560787242391834, "sim_compute_performance-ego0": 0.0019705285196718963}}
No reset possible
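Each *_min/mean/median/max group in these reports can be reproduced from the per-episode details JSON; with a single episode, the four aggregates necessarily coincide. A minimal sketch of that aggregation (the `aggregate` helper is illustrative, not part of the platform):

```python
import json
from statistics import mean, median

def aggregate(per_episode: dict) -> dict:
    """Collapse per-episode metric dicts into min/mean/median/max summaries,
    mirroring the evaluator report's "other stats" section."""
    metrics: dict[str, list[float]] = {}
    for episode_stats in per_episode.values():
        for name, value in episode_stats.items():
            metrics.setdefault(name, []).append(value)
    return {
        name: {"min": min(vs), "mean": mean(vs),
               "median": median(vs), "max": max(vs)}
        for name, vs in metrics.items()
    }

# With a single episode, all four aggregates coincide:
details = json.loads('{"d40-ego0": {"survival_time": 11.45, "driven_any": 2.97}}')
summary = aggregate(details)
assert summary["survival_time"]["min"] == summary["survival_time"]["max"] == 11.45
```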
Job 75465 — submission 14887 by Liam Han (label: exercises_braitenberg), challenge mooc-BV1, step sim-2of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:03:35
distance-from-start_mean: 4.527714656250262

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.011834713230502903
complete-iteration: 0.2231101666130848
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 4.527714656250262
driven_any: 4.7408138924301575
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.08085919681348298
get_robot_state: 0.0038759635756220514
get_state_dump: 0.01780816920906553
get_ui_image: 0.015086067680506824
in-drivable-lane: 18.00000000000012
set_robot_commands: 0.0023216943661592015
sim_compute_performance-ego0: 0.0019507044900487335
sim_compute_sim_state: 0.010587058239035989
sim_render-ego0: 0.0038533864589279047
simulation-passed: 1
step_physics: 0.07482908042844312
survival_time: 18.00000000000012
per-episode details: {"d40-ego0": {"driven_any": 4.7408138924301575, "get_ui_image": 0.015086067680506824, "step_physics": 0.07482908042844312, "survival_time": 18.00000000000012, "driven_lanedir": 0.0, "get_state_dump": 0.01780816920906553, "get_robot_state": 0.0038759635756220514, "sim_render-ego0": 0.0038533864589279047, "get_duckie_state": 0.08085919681348298, "in-drivable-lane": 18.00000000000012, "deviation-heading": 0.0, "agent_compute-ego0": 0.011834713230502903, "complete-iteration": 0.2231101666130848, "set_robot_commands": 0.0023216943661592015, "distance-from-start": 4.527714656250262, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010587058239035989, "sim_compute_performance-ego0": 0.0019507044900487335}}
No reset possible
Job 75460 — submission 14886 by Liam Han (label: exercises_braitenberg), challenge mooc-BV1, step sim-4of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:03:33
distance-from-start_mean: 1.1574919173866296

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.012193758539693006
complete-iteration: 0.25265411573035695
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 1.1574919173866296
driven_any: 1.1847044790544805
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.10317638284320772
get_robot_state: 0.003964999754480855
get_state_dump: 0.021367652393947136
get_ui_image: 0.016093555640580126
in-drivable-lane: 16.000000000000092
set_robot_commands: 0.002299511544058256
sim_compute_performance-ego0: 0.0019973742999020395
sim_compute_sim_state: 0.010845006069290303
sim_render-ego0: 0.003995246976335472
simulation-passed: 1
step_physics: 0.07661566036141179
survival_time: 16.000000000000092
per-episode details: {"d50-ego0": {"driven_any": 1.1847044790544805, "get_ui_image": 0.016093555640580126, "step_physics": 0.07661566036141179, "survival_time": 16.000000000000092, "driven_lanedir": 0.0, "get_state_dump": 0.021367652393947136, "get_robot_state": 0.003964999754480855, "sim_render-ego0": 0.003995246976335472, "get_duckie_state": 0.10317638284320772, "in-drivable-lane": 16.000000000000092, "deviation-heading": 0.0, "agent_compute-ego0": 0.012193758539693006, "complete-iteration": 0.25265411573035695, "set_robot_commands": 0.002299511544058256, "distance-from-start": 1.1574919173866296, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010845006069290303, "sim_compute_performance-ego0": 0.0019973742999020395}}
No reset possible
Job 75449 — submission 14885 by Allen Francis (label: exercises_braitenberg), challenge mooc-BV1, step sim-0of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:10:40
distance-from-start_mean: 3.812824330484871

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.011830061500415911
complete-iteration: 0.23664491797962553
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 3.812824330484871
driven_any: 4.319170311311945
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.09595880182855432
get_robot_state: 0.004028651239869994
get_state_dump: 0.01969790498382543
get_ui_image: 0.015618433861013852
in-drivable-lane: 59.99999999999873
set_robot_commands: 0.0023944308418318394
sim_compute_performance-ego0: 0.002046121149436322
sim_compute_sim_state: 0.008386975224071697
sim_render-ego0: 0.0039314076267213845
simulation-passed: 1
step_physics: 0.07265244693581409
survival_time: 59.99999999999873
per-episode details: {"d45-ego0": {"driven_any": 4.319170311311945, "get_ui_image": 0.015618433861013852, "step_physics": 0.07265244693581409, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.01969790498382543, "get_robot_state": 0.004028651239869994, "sim_render-ego0": 0.0039314076267213845, "get_duckie_state": 0.09595880182855432, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.011830061500415911, "complete-iteration": 0.23664491797962553, "set_robot_commands": 0.0023944308418318394, "distance-from-start": 3.812824330484871, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008386975224071697, "sim_compute_performance-ego0": 0.002046121149436322}}
No reset possible
Job 75448 — submission 14884 by Juan Ramirez (label: exercises_braitenberg), challenge mooc-BV1, step sim-4of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:01:26
distance-from-start_mean: 0.7516647109419023

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.011051576311995343
complete-iteration: 0.2381718420400852
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 0.7516647109419023
driven_any: 1.3208630269077095
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.09912149208347971
get_robot_state: 0.003710275743065811
get_state_dump: 0.02060843677055545
get_ui_image: 0.015345518181963664
in-drivable-lane: 4.049999999999994
set_robot_commands: 0.002309240945955602
sim_compute_performance-ego0: 0.0018529484911662777
sim_compute_sim_state: 0.009318502937875143
sim_render-ego0: 0.00363505177381562
simulation-passed: 1
step_physics: 0.07112528347387546
survival_time: 4.049999999999994
per-episode details: {"d50-ego0": {"driven_any": 1.3208630269077095, "get_ui_image": 0.015345518181963664, "step_physics": 0.07112528347387546, "survival_time": 4.049999999999994, "driven_lanedir": 0.0, "get_state_dump": 0.02060843677055545, "get_robot_state": 0.003710275743065811, "sim_render-ego0": 0.00363505177381562, "get_duckie_state": 0.09912149208347971, "in-drivable-lane": 4.049999999999994, "deviation-heading": 0.0, "agent_compute-ego0": 0.011051576311995343, "complete-iteration": 0.2381718420400852, "set_robot_commands": 0.002309240945955602, "distance-from-start": 0.7516647109419023, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009318502937875143, "sim_compute_performance-ego0": 0.0018529484911662777}}
No reset possible
Job 75445 — submission 14884 by Juan Ramirez (label: exercises_braitenberg), challenge mooc-BV1, step sim-4of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:01:36
distance-from-start_mean: 0.9689331756629788

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.011989952415548346
complete-iteration: 0.24919647298833375
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 0.9689331756629788
driven_any: 1.4241558529415783
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.10290236370537872
get_robot_state: 0.003925031231295678
get_state_dump: 0.021104843385757938
get_ui_image: 0.01641792123035718
in-drivable-lane: 4.599999999999992
set_robot_commands: 0.0022908641446021294
sim_compute_performance-ego0: 0.002004543940226237
sim_compute_sim_state: 0.009940247381887129
sim_render-ego0: 0.003869528411537088
simulation-passed: 1
step_physics: 0.07464918526270056
survival_time: 4.599999999999992
per-episode details: {"d50-ego0": {"driven_any": 1.4241558529415783, "get_ui_image": 0.01641792123035718, "step_physics": 0.07464918526270056, "survival_time": 4.599999999999992, "driven_lanedir": 0.0, "get_state_dump": 0.021104843385757938, "get_robot_state": 0.003925031231295678, "sim_render-ego0": 0.003869528411537088, "get_duckie_state": 0.10290236370537872, "in-drivable-lane": 4.599999999999992, "deviation-heading": 0.0, "agent_compute-ego0": 0.011989952415548346, "complete-iteration": 0.24919647298833375, "set_robot_commands": 0.0022908641446021294, "distance-from-start": 0.9689331756629788, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009940247381887129, "sim_compute_performance-ego0": 0.002004543940226237}}
No reset possible
Job 75442 — submission 14884 by Juan Ramirez (label: exercises_braitenberg), challenge mooc-BV1, step sim-4of5, status: success, up to date: no, evaluator: gpu-production-spot-0-01, duration: 0:02:09
distance-from-start_mean: 2.639914751219579

other stats (one episode, so min = mean = median = max):
agent_compute-ego0: 0.011939204656160794
complete-iteration: 0.25537305917495334
deviation-center-line: 0.0
deviation-heading: 0.0
distance-from-start: 2.639914751219579
driven_any: 3.0196381741238563
driven_lanedir: 0.0
driven_lanedir_consec: 0.0
get_duckie_state: 0.10457490193538176
get_robot_state: 0.004141416305150741
get_state_dump: 0.02157508257107857
get_ui_image: 0.01608281563489865
in-drivable-lane: 7.7499999999999805
set_robot_commands: 0.0023668805758158364
sim_compute_performance-ego0: 0.002032911166166648
sim_compute_sim_state: 0.009738631737537874
sim_render-ego0: 0.004041713017683763
simulation-passed: 1
step_physics: 0.07876791556676228
survival_time: 7.7499999999999805
per-episode details: {"d50-ego0": {"driven_any": 3.0196381741238563, "get_ui_image": 0.01608281563489865, "step_physics": 0.07876791556676228, "survival_time": 7.7499999999999805, "driven_lanedir": 0.0, "get_state_dump": 0.02157508257107857, "get_robot_state": 0.004141416305150741, "sim_render-ego0": 0.004041713017683763, "get_duckie_state": 0.10457490193538176, "in-drivable-lane": 7.7499999999999805, "deviation-heading": 0.0, "agent_compute-ego0": 0.011939204656160794, "complete-iteration": 0.25537305917495334, "set_robot_commands": 0.0023668805758158364, "distance-from-start": 2.639914751219579, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009738631737537874, "sim_compute_performance-ego0": 0.002032911166166648}}
No reset possible
Job 75434 — submission 14881 by Cosimo Bini (label: template-random), challenge aido-hello-sim-validation, step 370, status: success, up to date: yes, evaluator: gpu-production-spot-0-01, duration: 0:01:00
survival_time_median2.1500000000000004
in-drivable-lane_median1.15
driven_lanedir_consec_median0.2357331065993169
deviation-center-line_median0.05863921966719952


other stats
agent_compute-ego0_max0.01165805621580644
agent_compute-ego0_mean0.01165805621580644
agent_compute-ego0_median0.01165805621580644
agent_compute-ego0_min0.01165805621580644
complete-iteration_max0.14080209623683582
complete-iteration_mean0.14080209623683582
complete-iteration_median0.14080209623683582
complete-iteration_min0.14080209623683582
deviation-center-line_max0.05863921966719952
deviation-center-line_mean0.05863921966719952
deviation-center-line_min0.05863921966719952
deviation-heading_max0.46108604520141144
deviation-heading_mean0.46108604520141144
deviation-heading_median0.46108604520141144
deviation-heading_min0.46108604520141144
driven_any_max0.4603380961060899
driven_any_mean0.4603380961060899
driven_any_median0.4603380961060899
driven_any_min0.4603380961060899
driven_lanedir_consec (max/mean/min): 0.2357331065993169
driven_lanedir (max/mean/median/min): 0.2357331065993169
get_duckie_state (max/mean/median/min): 0.004393089901317249
get_robot_state (max/mean/median/min): 0.00376074422489513
get_state_dump (max/mean/median/min): 0.005863883278586648
get_ui_image (max/mean/median/min): 0.028152958913282913
in-drivable-lane (max/mean/min): 1.15
per-episode details:
{"hello-norm-small_loop-000-ego0": {"driven_any": 0.4603380961060899, "get_ui_image": 0.028152958913282913, "step_physics": 0.0731674541126598, "survival_time": 2.1500000000000004, "driven_lanedir": 0.2357331065993169, "get_state_dump": 0.005863883278586648, "get_robot_state": 0.00376074422489513, "sim_render-ego0": 0.004185069691051136, "get_duckie_state": 0.004393089901317249, "in-drivable-lane": 1.15, "deviation-heading": 0.46108604520141144, "agent_compute-ego0": 0.01165805621580644, "complete-iteration": 0.14080209623683582, "set_robot_commands": 0.0022562362930991435, "deviation-center-line": 0.05863921966719952, "driven_lanedir_consec": 0.2357331065993169, "sim_compute_sim_state": 0.0051987116987055, "sim_compute_performance-ego0": 0.002079925753853538}}
set_robot_commands (max/mean/median/min): 0.0022562362930991435
sim_compute_performance-ego0 (max/mean/median/min): 0.002079925753853538
sim_compute_sim_state (max/mean/median/min): 0.0051987116987055
sim_render-ego0 (max/mean/median/min): 0.004185069691051136
simulation-passed: 1
step_physics (max/mean/median/min): 0.0731674541126598
survival_time (max/mean/min): 2.1500000000000004
No reset possible
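The aggregate rows above (`*_min`, `*_mean`, `*_median`, `*_max`) are summaries of the per-episode details JSON; with a single episode, all four aggregates collapse to the same value. A minimal sketch of that aggregation, using a trimmed-down copy of the details payload (the metric subset chosen here is illustrative, not the evaluator's actual code):

```python
import json
import statistics

# Trimmed per-episode details as reported above (single episode).
details = json.loads(
    '{"hello-norm-small_loop-000-ego0": {"survival_time": 2.1500000000000004,'
    ' "driven_lanedir_consec": 0.2357331065993169, "in-drivable-lane": 1.15}}'
)

# Aggregate each metric across episodes, mirroring the *_min/*_mean/
# *_median/*_max rows in the stats table.
episodes = list(details.values())
metrics = {}
for key in episodes[0]:
    values = [ep[key] for ep in episodes]
    metrics[key] = {
        "min": min(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "max": max(values),
    }

# With one episode, min == mean == median == max for every metric.
print(metrics["survival_time"]["mean"])  # 2.1500000000000004
```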
Job 75417 (submission 14878): Franz Pucher, "v9", challenge mooc-BV1, step sim-4of5, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:11:04
distance-from-start_mean: 2.2499718090873033


other stats
agent_compute-ego0 (max/mean/median/min): 0.01141424877855998
complete-iteration (max/mean/median/min): 0.24381025387385208
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
distance-from-start (max/median/min): 2.2499718090873033
driven_any (max/mean/median/min): 22.983868766320914
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.10236768400937096
get_robot_state (max/mean/median/min): 0.003842216447231474
get_state_dump (max/mean/median/min): 0.020458172401917368
get_ui_image (max/mean/median/min): 0.01522519110045167
in-drivable-lane (max/mean/median/min): 59.99999999999873
per-episode details:
{"d50-ego0": {"driven_any": 22.983868766320914, "get_ui_image": 0.01522519110045167, "step_physics": 0.07125612718675853, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.020458172401917368, "get_robot_state": 0.003842216447231474, "sim_render-ego0": 0.003836774309906336, "get_duckie_state": 0.10236768400937096, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.01141424877855998, "complete-iteration": 0.24381025387385208, "set_robot_commands": 0.002267784520450182, "distance-from-start": 2.2499718090873033, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.011087490656691526, "sim_compute_performance-ego0": 0.0019579690064518377}}
set_robot_commands (max/mean/median/min): 0.002267784520450182
sim_compute_performance-ego0 (max/mean/median/min): 0.0019579690064518377
sim_compute_sim_state (max/mean/median/min): 0.011087490656691526
sim_render-ego0 (max/mean/median/min): 0.003836774309906336
simulation-passed: 1
step_physics (max/mean/median/min): 0.07125612718675853
survival_time (max/mean/median/min): 59.99999999999873
No reset possible
Job 75415 (submission 14877): Nick Conway, "exercises_braitenberg", challenge mooc-BV1, step sim-3of5, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:09:38
distance-from-start_mean: 5.029004960358704


other stats
agent_compute-ego0 (max/mean/median/min): 0.01191208642487919
complete-iteration (max/mean/median/min): 0.2072136255227755
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
distance-from-start (max/median/min): 5.029004960358704
driven_any (max/mean/median/min): 5.391737915919459
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.06624834960346715
get_robot_state (max/mean/median/min): 0.004169469074246091
get_state_dump (max/mean/median/min): 0.015005239737619467
get_ui_image (max/mean/median/min): 0.014838568673939827
in-drivable-lane (max/mean/median/min): 59.99999999999873
per-episode details:
{"d30-ego0": {"driven_any": 5.391737915919459, "get_ui_image": 0.014838568673939827, "step_physics": 0.07788267500890879, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.015005239737619467, "get_robot_state": 0.004169469074246091, "sim_render-ego0": 0.00405812342895457, "get_duckie_state": 0.06624834960346715, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.01191208642487919, "complete-iteration": 0.2072136255227755, "set_robot_commands": 0.0024362384627800403, "distance-from-start": 5.029004960358704, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008450972646797427, "sim_compute_performance-ego0": 0.00210915120019206}}
set_robot_commands (max/mean/median/min): 0.0024362384627800403
sim_compute_performance-ego0 (max/mean/median/min): 0.00210915120019206
sim_compute_sim_state (max/mean/median/min): 0.008450972646797427
sim_render-ego0 (max/mean/median/min): 0.00405812342895457
simulation-passed: 1
step_physics (max/mean/median/min): 0.07788267500890879
survival_time (max/mean/median/min): 59.99999999999873
No reset possible
Job 75403 (submission 14875): Dohyeong Kim, "exercises_braitenberg", challenge mooc-BV1, step sim-3of5, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:04:32
distance-from-start_mean: 5.531992648167123


other stats
agent_compute-ego0 (max/mean/median/min): 0.011842997933206287
complete-iteration (max/mean/median/min): 0.2034287124510236
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
distance-from-start (max/median/min): 5.531992648167123
driven_any (max/mean/median/min): 5.738887789469651
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.06455281292378662
get_robot_state (max/mean/median/min): 0.003975199301715805
get_state_dump (max/mean/median/min): 0.01487306785969599
get_ui_image (max/mean/median/min): 0.014585278294829704
in-drivable-lane (max/mean/median/min): 24.650000000000215
per-episode details:
{"d30-ego0": {"driven_any": 5.738887789469651, "get_ui_image": 0.014585278294829704, "step_physics": 0.07635521599155688, "survival_time": 24.650000000000215, "driven_lanedir": 0.0, "get_state_dump": 0.01487306785969599, "get_robot_state": 0.003975199301715805, "sim_render-ego0": 0.003984555059116379, "get_duckie_state": 0.06455281292378662, "in-drivable-lane": 24.650000000000215, "deviation-heading": 0.0, "agent_compute-ego0": 0.011842997933206287, "complete-iteration": 0.2034287124510236, "set_robot_commands": 0.002334868377036894, "distance-from-start": 5.531992648167123, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.00877725305827523, "sim_compute_performance-ego0": 0.0020449745510271204}}
set_robot_commands (max/mean/median/min): 0.002334868377036894
sim_compute_performance-ego0 (max/mean/median/min): 0.0020449745510271204
sim_compute_sim_state (max/mean/median/min): 0.00877725305827523
sim_render-ego0 (max/mean/median/min): 0.003984555059116379
simulation-passed: 1
step_physics (max/mean/median/min): 0.07635521599155688
survival_time (max/mean/median/min): 24.650000000000215
No reset possible
Job 75397 (submission 14873): Marcus Ong, "exercises_braitenberg", challenge mooc-BV1, step sim-0of5, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:10:03
distance-from-start_mean: 1.8665845647998005


other stats
agent_compute-ego0 (max/mean/median/min): 0.01104552501643528
complete-iteration (max/mean/median/min): 0.21968249178845917
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
distance-from-start (max/median/min): 1.8665845647998005
driven_any (max/mean/median/min): 5.351354926465601
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.08825384568016693
get_robot_state (max/mean/median/min): 0.003596292943581256
get_state_dump (max/mean/median/min): 0.018569281059538294
get_ui_image (max/mean/median/min): 0.014906986666956511
in-drivable-lane (max/mean/median/min): 59.99999999999873
per-episode details:
{"d45-ego0": {"driven_any": 5.351354926465601, "get_ui_image": 0.014906986666956511, "step_physics": 0.06496319385690554, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.018569281059538294, "get_robot_state": 0.003596292943581256, "sim_render-ego0": 0.0037055366938556857, "get_duckie_state": 0.08825384568016693, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.01104552501643528, "complete-iteration": 0.21968249178845917, "set_robot_commands": 0.002153627679905824, "distance-from-start": 1.8665845647998005, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.010528264494363115, "sim_compute_performance-ego0": 0.0018729804258957989}}
set_robot_commands (max/mean/median/min): 0.002153627679905824
sim_compute_performance-ego0 (max/mean/median/min): 0.0018729804258957989
sim_compute_sim_state (max/mean/median/min): 0.010528264494363115
sim_render-ego0 (max/mean/median/min): 0.0037055366938556857
simulation-passed: 1
step_physics (max/mean/median/min): 0.06496319385690554
survival_time (max/mean/median/min): 59.99999999999873
No reset possible
Job 75391 (submission 14872): Bruno Maitre, "exercises_braitenberg", challenge mooc-BV1, step sim-1of5, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:05:22
distance-from-start_mean: 5.207894565707058


other stats
agent_compute-ego0 (max/mean/median/min): 0.013567618325225309
complete-iteration (max/mean/median/min): 0.27401038775077236
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
distance-from-start (max/median/min): 5.207894565707058
driven_any (max/mean/median/min): 5.294661204552162
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.1245725888472337
get_robot_state (max/mean/median/min): 0.003964142412201971
get_state_dump (max/mean/median/min): 0.0247942550569518
get_ui_image (max/mean/median/min): 0.016848404183347
in-drivable-lane (max/mean/median/min): 23.350000000000197
per-episode details:
{"d60-ego0": {"driven_any": 5.294661204552162, "get_ui_image": 0.016848404183347, "step_physics": 0.07285277252523308, "survival_time": 23.350000000000197, "driven_lanedir": 0.0, "get_state_dump": 0.0247942550569518, "get_robot_state": 0.003964142412201971, "sim_render-ego0": 0.0039300302154997475, "get_duckie_state": 0.1245725888472337, "in-drivable-lane": 23.350000000000197, "deviation-heading": 0.0, "agent_compute-ego0": 0.013567618325225309, "complete-iteration": 0.27401038775077236, "set_robot_commands": 0.002371154279790373, "distance-from-start": 5.207894565707058, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.00904673439824683, "sim_compute_performance-ego0": 0.0019683099200582914}}
set_robot_commands (max/mean/median/min): 0.002371154279790373
sim_compute_performance-ego0 (max/mean/median/min): 0.0019683099200582914
sim_compute_sim_state (max/mean/median/min): 0.00904673439824683
sim_render-ego0 (max/mean/median/min): 0.0039300302154997475
simulation-passed: 1
step_physics (max/mean/median/min): 0.07285277252523308
survival_time (max/mean/median/min): 23.350000000000197
No reset possible
Job 75385 (submission 14871): Andrew Fletcher, "exercises_braitenberg", challenge mooc-BV1, step sim-3of5, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:09:26
distance-from-start_mean: 5.375349887272795


other stats
agent_compute-ego0 (max/mean/median/min): 0.011805532179109063
complete-iteration (max/mean/median/min): 0.19454792536466345
deviation-center-line (max/mean/median/min): 0.0
deviation-heading (max/mean/median/min): 0.0
distance-from-start (max/median/min): 5.375349887272795
driven_any (max/mean/median/min): 5.378399719786445
driven_lanedir_consec (max/mean/median/min): 0.0
driven_lanedir (max/mean/median/min): 0.0
get_duckie_state (max/mean/median/min): 0.061322698982232415
get_robot_state (max/mean/median/min): 0.003757346579673189
get_state_dump (max/mean/median/min): 0.014496278604004008
get_ui_image (max/mean/median/min): 0.01443326761085326
in-drivable-lane (max/mean/median/min): 59.99999999999873
per-episode details:
{"d30-ego0": {"driven_any": 5.378399719786445, "get_ui_image": 0.01443326761085326, "step_physics": 0.07178445660402932, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.014496278604004008, "get_robot_state": 0.003757346579673189, "sim_render-ego0": 0.00384966816929953, "get_duckie_state": 0.061322698982232415, "in-drivable-lane": 59.99999999999873, "deviation-heading": 0.0, "agent_compute-ego0": 0.011805532179109063, "complete-iteration": 0.19454792536466345, "set_robot_commands": 0.002261484393867029, "distance-from-start": 5.375349887272795, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.008812247863121573, "sim_compute_performance-ego0": 0.001928967103473749}}
set_robot_commands (max/mean/median/min): 0.002261484393867029
sim_compute_performance-ego0 (max/mean/median/min): 0.001928967103473749
sim_compute_sim_state (max/mean/median/min): 0.008812247863121573
sim_render-ego0 (max/mean/median/min): 0.00384966816929953
simulation-passed: 1
step_physics (max/mean/median/min): 0.07178445660402932
survival_time (max/mean/median/min): 59.99999999999873
No reset possible
Job 75382 (submission 13686): Anthony Courchesne 🇨🇦, "Real100FH", challenge aido-LF-sim-validation, step sim-0of4, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:02:33
driven_lanedir_consec_median: 0.8897463109660857
survival_time_median: 13.550000000000058
deviation-center-line_median: 0.47698453706957816
in-drivable-lane_median: 7.80000000000007


other stats
agent_compute-ego0 (max/mean/median/min): 0.04153358410386478
complete-iteration (max/mean/median/min): 0.1654694378376007
deviation-center-line (max/mean/min): 0.47698453706957816
deviation-heading (max/mean/median/min): 2.969988309142334
distance-from-start (max/mean/median/min): 1.9182039976286849
driven_any (max/mean/median/min): 2.566053610110275
driven_lanedir_consec (max/mean/min): 0.8897463109660857
driven_lanedir (max/mean/median/min): 0.8897463109660857
get_duckie_state (max/mean/median/min): 1.4050918466904584e-06
get_robot_state (max/mean/median/min): 0.0036205614314359777
get_state_dump (max/mean/median/min): 0.004620034028502072
get_ui_image (max/mean/median/min): 0.01885116889196284
in-drivable-lane (max/mean/min): 7.80000000000007
per-episode details:
{"LF-norm-loop-000-ego0": {"driven_any": 2.566053610110275, "get_ui_image": 0.01885116889196284, "step_physics": 0.08039175236926359, "survival_time": 13.550000000000058, "driven_lanedir": 0.8897463109660857, "get_state_dump": 0.004620034028502072, "get_robot_state": 0.0036205614314359777, "sim_render-ego0": 0.003959149122238159, "get_duckie_state": 1.4050918466904584e-06, "in-drivable-lane": 7.80000000000007, "deviation-heading": 2.969988309142334, "agent_compute-ego0": 0.04153358410386478, "complete-iteration": 0.1654694378376007, "set_robot_commands": 0.002264311208444483, "distance-from-start": 1.9182039976286849, "deviation-center-line": 0.47698453706957816, "driven_lanedir_consec": 0.8897463109660857, "sim_compute_sim_state": 0.008190908852745505, "sim_compute_performance-ego0": 0.0019490850322386795}}
set_robot_commands (max/mean/median/min): 0.002264311208444483
sim_compute_performance-ego0 (max/mean/median/min): 0.0019490850322386795
sim_compute_sim_state (max/mean/median/min): 0.008190908852745505
sim_render-ego0 (max/mean/median/min): 0.003959149122238159
simulation-passed: 1
step_physics (max/mean/median/min): 0.08039175236926359
survival_time (max/mean/min): 13.550000000000058
No reset possible
Job 75369 (submission 13692): Samuel Alexander, "template-pytorch", challenge aido-LF-sim-validation, step sim-1of4, status success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:09:53
driven_lanedir_consec_median: 0.4995991377443141
survival_time_median: 59.39999999999876
deviation-center-line_median: 2.0708294940220178
in-drivable-lane_median: 37.549999999999


other stats
agent_compute-ego0 (max/mean/median/min): 0.014344158846355468
complete-iteration (max/mean/median/min): 0.2312896181496379
deviation-center-line (max/mean/min): 2.0708294940220178
deviation-heading (max/mean/median/min): 17.932295562616137
distance-from-start (max/mean/median/min): 0.3778555960939288
driven_any (max/mean/median/min): 3.724854611036119
driven_lanedir_consec (max/mean/min): 0.4995991377443141
driven_lanedir (max/mean/median/min): 0.7275768389470647
get_duckie_state (max/mean/median/min): 1.5939354796285285e-06
get_robot_state (max/mean/median/min): 0.003850185941306356
get_state_dump (max/mean/median/min): 0.005101950254833327
get_ui_image (max/mean/median/min): 0.02295280946614463
in-drivable-lane (max/mean/min): 37.549999999999
per-episode details:
{"LF-norm-techtrack-000-ego0": {"driven_any": 3.724854611036119, "get_ui_image": 0.02295280946614463, "step_physics": 0.16719750419597448, "survival_time": 59.39999999999876, "driven_lanedir": 0.7275768389470647, "get_state_dump": 0.005101950254833327, "get_robot_state": 0.003850185941306356, "sim_render-ego0": 0.004105893777537686, "get_duckie_state": 1.5939354796285285e-06, "in-drivable-lane": 37.549999999999, "deviation-heading": 17.932295562616137, "agent_compute-ego0": 0.014344158846355468, "complete-iteration": 0.2312896181496379, "set_robot_commands": 0.002431267344520511, "distance-from-start": 0.3778555960939288, "deviation-center-line": 2.0708294940220178, "driven_lanedir_consec": 0.4995991377443141, "sim_compute_sim_state": 0.009148027237169476, "sim_compute_performance-ego0": 0.002060349595155708}}
set_robot_commands (max/mean/median/min): 0.002431267344520511
sim_compute_performance-ego0 (max/mean/median/min): 0.002060349595155708
sim_compute_sim_state (max/mean/median/min): 0.009148027237169476
sim_render-ego0 (max/mean/median/min): 0.004105893777537686
simulation-passed: 1
step_physics (max/mean/median/min): 0.16719750419597448
survival_time (max/mean/min): 59.39999999999876
No reset possible
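A note on reading these tables: each job above ran a single simulation episode, so the `_max`, `_mean`, `_median`, and `_min` rows for a given metric all carry the same number — the summary statistics are computed over the per-episode values listed in the `details` dictionary. A minimal sketch of that aggregation (the `aggregate` helper is illustrative, not the platform's actual scoring code):

```python
import json
import statistics

# Per-episode results, in the shape of the "details" field above
# (abbreviated to two metrics; a single episode, as in these jobs).
details = json.loads(
    '{"LF-norm-techtrack-000-ego0": '
    '{"survival_time": 59.39999999999876, '
    '"driven_lanedir_consec": 0.4995991377443141}}'
)

def aggregate(metric: str) -> dict:
    """Collect one metric across all episodes and summarize it."""
    values = [episode[metric] for episode in details.values()]
    return {
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "min": min(values),
    }

stats = aggregate("survival_time")
# With only one episode, max == mean == median == min.
```

With multiple episodes per job (e.g. a `sim-*of4` step evaluated jointly), the four rows would start to differ.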
Job 75365 · submission 13694 · Samuel Alexander · template-tensorflow · aido-LF-sim-validation · sim-0of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:01:10
driven_lanedir_consec_median: 0.12175718439201956
survival_time_median: 2.3999999999999995
deviation-center-line_median: 0.05984709198839215
in-drivable-lane_median: 1.4499999999999993


other stats
agent_compute-ego0_max: 0.05414061157070861
agent_compute-ego0_mean: 0.05414061157070861
agent_compute-ego0_median: 0.05414061157070861
agent_compute-ego0_min: 0.05414061157070861
complete-iteration_max: 0.1857385051493742
complete-iteration_mean: 0.1857385051493742
complete-iteration_median: 0.1857385051493742
complete-iteration_min: 0.1857385051493742
deviation-center-line_max: 0.05984709198839215
deviation-center-line_mean: 0.05984709198839215
deviation-center-line_min: 0.05984709198839215
deviation-heading_max: 0.5657694427499156
deviation-heading_mean: 0.5657694427499156
deviation-heading_median: 0.5657694427499156
deviation-heading_min: 0.5657694427499156
distance-from-start_max: 0.3463386337198754
distance-from-start_mean: 0.3463386337198754
distance-from-start_median: 0.3463386337198754
distance-from-start_min: 0.3463386337198754
driven_any_max: 0.3620278530934428
driven_any_mean: 0.3620278530934428
driven_any_median: 0.3620278530934428
driven_any_min: 0.3620278530934428
driven_lanedir_consec_max: 0.12175718439201956
driven_lanedir_consec_mean: 0.12175718439201956
driven_lanedir_consec_min: 0.12175718439201956
driven_lanedir_max: 0.12175718439201956
driven_lanedir_mean: 0.12175718439201956
driven_lanedir_median: 0.12175718439201956
driven_lanedir_min: 0.12175718439201956
get_duckie_state_max: 1.1726301543566643e-06
get_duckie_state_mean: 1.1726301543566643e-06
get_duckie_state_median: 1.1726301543566643e-06
get_duckie_state_min: 1.1726301543566643e-06
get_robot_state_max: 0.003567559378487723
get_robot_state_mean: 0.003567559378487723
get_robot_state_median: 0.003567559378487723
get_robot_state_min: 0.003567559378487723
get_state_dump_max: 0.0045231751033238
get_state_dump_mean: 0.0045231751033238
get_state_dump_median: 0.0045231751033238
get_state_dump_min: 0.0045231751033238
get_ui_image_max: 0.019275889104726364
get_ui_image_mean: 0.019275889104726364
get_ui_image_median: 0.019275889104726364
get_ui_image_min: 0.019275889104726364
in-drivable-lane_max: 1.4499999999999993
in-drivable-lane_mean: 1.4499999999999993
in-drivable-lane_min: 1.4499999999999993
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 0.3620278530934428, "get_ui_image": 0.019275889104726364, "step_physics": 0.08896700216799366, "survival_time": 2.3999999999999995, "driven_lanedir": 0.12175718439201956, "get_state_dump": 0.0045231751033238, "get_robot_state": 0.003567559378487723, "sim_render-ego0": 0.003897336064552774, "get_duckie_state": 1.1726301543566643e-06, "in-drivable-lane": 1.4499999999999993, "deviation-heading": 0.5657694427499156, "agent_compute-ego0": 0.05414061157070861, "complete-iteration": 0.1857385051493742, "set_robot_commands": 0.0021481319349639268, "distance-from-start": 0.3463386337198754, "deviation-center-line": 0.05984709198839215, "driven_lanedir_consec": 0.12175718439201956, "sim_compute_sim_state": 0.007253734432921118, "sim_compute_performance-ego0": 0.0018894915678063216}}
set_robot_commands_max: 0.0021481319349639268
set_robot_commands_mean: 0.0021481319349639268
set_robot_commands_median: 0.0021481319349639268
set_robot_commands_min: 0.0021481319349639268
sim_compute_performance-ego0_max: 0.0018894915678063216
sim_compute_performance-ego0_mean: 0.0018894915678063216
sim_compute_performance-ego0_median: 0.0018894915678063216
sim_compute_performance-ego0_min: 0.0018894915678063216
sim_compute_sim_state_max: 0.007253734432921118
sim_compute_sim_state_mean: 0.007253734432921118
sim_compute_sim_state_median: 0.007253734432921118
sim_compute_sim_state_min: 0.007253734432921118
sim_render-ego0_max: 0.003897336064552774
sim_render-ego0_mean: 0.003897336064552774
sim_render-ego0_median: 0.003897336064552774
sim_render-ego0_min: 0.003897336064552774
simulation-passed: 1
step_physics_max: 0.08896700216799366
step_physics_mean: 0.08896700216799366
step_physics_median: 0.08896700216799366
step_physics_min: 0.08896700216799366
survival_time_max: 2.3999999999999995
survival_time_mean: 2.3999999999999995
survival_time_min: 2.3999999999999995
No reset possible
Job 75363 · submission 13694 · Samuel Alexander · template-tensorflow · aido-LF-sim-validation · sim-0of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:01:29
driven_lanedir_consec_median: 0.11935424826039598
survival_time_median: 2.5999999999999988
deviation-center-line_median: 0.0614662572149956
in-drivable-lane_median: 1.6499999999999986


other stats
agent_compute-ego0_max: 0.05660129493137576
agent_compute-ego0_mean: 0.05660129493137576
agent_compute-ego0_median: 0.05660129493137576
agent_compute-ego0_min: 0.05660129493137576
complete-iteration_max: 0.20434052989167983
complete-iteration_mean: 0.20434052989167983
complete-iteration_median: 0.20434052989167983
complete-iteration_min: 0.20434052989167983
deviation-center-line_max: 0.0614662572149956
deviation-center-line_mean: 0.0614662572149956
deviation-center-line_min: 0.0614662572149956
deviation-heading_max: 0.5755984896728548
deviation-heading_mean: 0.5755984896728548
deviation-heading_median: 0.5755984896728548
deviation-heading_min: 0.5755984896728548
distance-from-start_max: 0.3280197490067922
distance-from-start_mean: 0.3280197490067922
distance-from-start_median: 0.3280197490067922
distance-from-start_min: 0.3280197490067922
driven_any_max: 0.3475374534465107
driven_any_mean: 0.3475374534465107
driven_any_median: 0.3475374534465107
driven_any_min: 0.3475374534465107
driven_lanedir_consec_max: 0.11935424826039598
driven_lanedir_consec_mean: 0.11935424826039598
driven_lanedir_consec_min: 0.11935424826039598
driven_lanedir_max: 0.11935424826039598
driven_lanedir_mean: 0.11935424826039598
driven_lanedir_median: 0.11935424826039598
driven_lanedir_min: 0.11935424826039598
get_duckie_state_max: 2.226739559533461e-06
get_duckie_state_mean: 2.226739559533461e-06
get_duckie_state_median: 2.226739559533461e-06
get_duckie_state_min: 2.226739559533461e-06
get_robot_state_max: 0.003844400621810049
get_robot_state_mean: 0.003844400621810049
get_robot_state_median: 0.003844400621810049
get_robot_state_min: 0.003844400621810049
get_state_dump_max: 0.005164434325020268
get_state_dump_mean: 0.005164434325020268
get_state_dump_median: 0.005164434325020268
get_state_dump_min: 0.005164434325020268
get_ui_image_max: 0.020021299146256357
get_ui_image_mean: 0.020021299146256357
get_ui_image_median: 0.020021299146256357
get_ui_image_min: 0.020021299146256357
in-drivable-lane_max: 1.6499999999999986
in-drivable-lane_mean: 1.6499999999999986
in-drivable-lane_min: 1.6499999999999986
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 0.3475374534465107, "get_ui_image": 0.020021299146256357, "step_physics": 0.1019287334298188, "survival_time": 2.5999999999999988, "driven_lanedir": 0.11935424826039598, "get_state_dump": 0.005164434325020268, "get_robot_state": 0.003844400621810049, "sim_render-ego0": 0.004317980892253372, "get_duckie_state": 2.226739559533461e-06, "in-drivable-lane": 1.6499999999999986, "deviation-heading": 0.5755984896728548, "agent_compute-ego0": 0.05660129493137576, "complete-iteration": 0.20434052989167983, "set_robot_commands": 0.002349965977218916, "distance-from-start": 0.3280197490067922, "deviation-center-line": 0.0614662572149956, "driven_lanedir_consec": 0.11935424826039598, "sim_compute_sim_state": 0.007954462519231832, "sim_compute_performance-ego0": 0.002064740882729584}}
set_robot_commands_max: 0.002349965977218916
set_robot_commands_mean: 0.002349965977218916
set_robot_commands_median: 0.002349965977218916
set_robot_commands_min: 0.002349965977218916
sim_compute_performance-ego0_max: 0.002064740882729584
sim_compute_performance-ego0_mean: 0.002064740882729584
sim_compute_performance-ego0_median: 0.002064740882729584
sim_compute_performance-ego0_min: 0.002064740882729584
sim_compute_sim_state_max: 0.007954462519231832
sim_compute_sim_state_mean: 0.007954462519231832
sim_compute_sim_state_median: 0.007954462519231832
sim_compute_sim_state_min: 0.007954462519231832
sim_render-ego0_max: 0.004317980892253372
sim_render-ego0_mean: 0.004317980892253372
sim_render-ego0_median: 0.004317980892253372
sim_render-ego0_min: 0.004317980892253372
simulation-passed: 1
step_physics_max: 0.1019287334298188
step_physics_mean: 0.1019287334298188
step_physics_median: 0.1019287334298188
step_physics_min: 0.1019287334298188
survival_time_max: 2.5999999999999988
survival_time_mean: 2.5999999999999988
survival_time_min: 2.5999999999999988
No reset possible
Job 75352 · submission 13697 · Samuel Alexander · template-pytorch · aido-LF-sim-validation · sim-0of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:07:24
driven_lanedir_consec_median: 0.8622290679152014
survival_time_median: 45.29999999999957
deviation-center-line_median: 0.9237150280236306
in-drivable-lane_median: 35.149999999999494


other stats
agent_compute-ego0_max: 0.01465374973932201
agent_compute-ego0_mean: 0.01465374973932201
agent_compute-ego0_median: 0.01465374973932201
agent_compute-ego0_min: 0.01465374973932201
complete-iteration_max: 0.20427126448104505
complete-iteration_mean: 0.20427126448104505
complete-iteration_median: 0.20427126448104505
complete-iteration_min: 0.20427126448104505
deviation-center-line_max: 0.9237150280236306
deviation-center-line_mean: 0.9237150280236306
deviation-center-line_min: 0.9237150280236306
deviation-heading_max: 7.138393071698855
deviation-heading_mean: 7.138393071698855
deviation-heading_median: 7.138393071698855
deviation-heading_min: 7.138393071698855
distance-from-start_max: 0.6383921825542074
distance-from-start_mean: 0.6383921825542074
distance-from-start_median: 0.6383921825542074
distance-from-start_min: 0.6383921825542074
driven_any_max: 5.914625724734941
driven_any_mean: 5.914625724734941
driven_any_median: 5.914625724734941
driven_any_min: 5.914625724734941
driven_lanedir_consec_max: 0.8622290679152014
driven_lanedir_consec_mean: 0.8622290679152014
driven_lanedir_consec_min: 0.8622290679152014
driven_lanedir_max: 0.8622290679152014
driven_lanedir_mean: 0.8622290679152014
driven_lanedir_median: 0.8622290679152014
driven_lanedir_min: 0.8622290679152014
get_duckie_state_max: 1.2720038635859547e-06
get_duckie_state_mean: 1.2720038635859547e-06
get_duckie_state_median: 1.2720038635859547e-06
get_duckie_state_min: 1.2720038635859547e-06
get_robot_state_max: 0.003768292651023991
get_robot_state_mean: 0.003768292651023991
get_robot_state_median: 0.003768292651023991
get_robot_state_min: 0.003768292651023991
get_state_dump_max: 0.004811414415807419
get_state_dump_mean: 0.004811414415807419
get_state_dump_median: 0.004811414415807419
get_state_dump_min: 0.004811414415807419
get_ui_image_max: 0.01983006234468673
get_ui_image_mean: 0.01983006234468673
get_ui_image_median: 0.01983006234468673
get_ui_image_min: 0.01983006234468673
in-drivable-lane_max: 35.149999999999494
in-drivable-lane_mean: 35.149999999999494
in-drivable-lane_min: 35.149999999999494
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 5.914625724734941, "get_ui_image": 0.01983006234468673, "step_physics": 0.14343211012086374, "survival_time": 45.29999999999957, "driven_lanedir": 0.8622290679152014, "get_state_dump": 0.004811414415807419, "get_robot_state": 0.003768292651023991, "sim_render-ego0": 0.00398269351551262, "get_duckie_state": 1.2720038635859547e-06, "in-drivable-lane": 35.149999999999494, "deviation-heading": 7.138393071698855, "agent_compute-ego0": 0.01465374973932201, "complete-iteration": 0.20427126448104505, "set_robot_commands": 0.0023579928683078013, "distance-from-start": 0.6383921825542074, "deviation-center-line": 0.9237150280236306, "driven_lanedir_consec": 0.8622290679152014, "sim_compute_sim_state": 0.009356758166042946, "sim_compute_performance-ego0": 0.0019902115751390553}}
set_robot_commands_max: 0.0023579928683078013
set_robot_commands_mean: 0.0023579928683078013
set_robot_commands_median: 0.0023579928683078013
set_robot_commands_min: 0.0023579928683078013
sim_compute_performance-ego0_max: 0.0019902115751390553
sim_compute_performance-ego0_mean: 0.0019902115751390553
sim_compute_performance-ego0_median: 0.0019902115751390553
sim_compute_performance-ego0_min: 0.0019902115751390553
sim_compute_sim_state_max: 0.009356758166042946
sim_compute_sim_state_mean: 0.009356758166042946
sim_compute_sim_state_median: 0.009356758166042946
sim_compute_sim_state_min: 0.009356758166042946
sim_render-ego0_max: 0.00398269351551262
sim_render-ego0_mean: 0.00398269351551262
sim_render-ego0_median: 0.00398269351551262
sim_render-ego0_min: 0.00398269351551262
simulation-passed: 1
step_physics_max: 0.14343211012086374
step_physics_mean: 0.14343211012086374
step_physics_median: 0.14343211012086374
step_physics_min: 0.14343211012086374
survival_time_max: 45.29999999999957
survival_time_mean: 45.29999999999957
survival_time_min: 45.29999999999957
No reset possible
Job 75348 · submission 13912 · YU CHEN · CBC Net v2 - test · aido-LFP-sim-validation · sim-2of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:01:59
survival_time_median: 6.449999999999985
in-drivable-lane_median: 0.8999999999999968
driven_lanedir_consec_median: 2.1175776338495185
deviation-center-line_median: 0.35839973717404355


other stats
agent_compute-ego0_max: 0.10194785778339092
agent_compute-ego0_mean: 0.10194785778339092
agent_compute-ego0_median: 0.10194785778339092
agent_compute-ego0_min: 0.10194785778339092
complete-iteration_max: 0.30311637291541466
complete-iteration_mean: 0.30311637291541466
complete-iteration_median: 0.30311637291541466
complete-iteration_min: 0.30311637291541466
deviation-center-line_max: 0.35839973717404355
deviation-center-line_mean: 0.35839973717404355
deviation-center-line_min: 0.35839973717404355
deviation-heading_max: 1.3801853404347686
deviation-heading_mean: 1.3801853404347686
deviation-heading_median: 1.3801853404347686
deviation-heading_min: 1.3801853404347686
distance-from-start_max: 2.0398436211719253
distance-from-start_mean: 2.0398436211719253
distance-from-start_median: 2.0398436211719253
distance-from-start_min: 2.0398436211719253
driven_any_max: 2.6280917656821425
driven_any_mean: 2.6280917656821425
driven_any_median: 2.6280917656821425
driven_any_min: 2.6280917656821425
driven_lanedir_consec_max: 2.1175776338495185
driven_lanedir_consec_mean: 2.1175776338495185
driven_lanedir_consec_min: 2.1175776338495185
driven_lanedir_max: 2.1175776338495185
driven_lanedir_mean: 2.1175776338495185
driven_lanedir_median: 2.1175776338495185
driven_lanedir_min: 2.1175776338495185
get_duckie_state_max: 0.02565203446608323
get_duckie_state_mean: 0.02565203446608323
get_duckie_state_median: 0.02565203446608323
get_duckie_state_min: 0.02565203446608323
get_robot_state_max: 0.0040167276675884545
get_robot_state_mean: 0.0040167276675884545
get_robot_state_median: 0.0040167276675884545
get_robot_state_min: 0.0040167276675884545
get_state_dump_max: 0.009221885754511909
get_state_dump_mean: 0.009221885754511909
get_state_dump_median: 0.009221885754511909
get_state_dump_min: 0.009221885754511909
get_ui_image_max: 0.02148339198185847
get_ui_image_mean: 0.02148339198185847
get_ui_image_median: 0.02148339198185847
get_ui_image_min: 0.02148339198185847
in-drivable-lane_max: 0.8999999999999968
in-drivable-lane_mean: 0.8999999999999968
in-drivable-lane_min: 0.8999999999999968
per-episodes details:
{"LFP-norm-loop-000-ego0": {"driven_any": 2.6280917656821425, "get_ui_image": 0.02148339198185847, "step_physics": 0.12224850838000956, "survival_time": 6.449999999999985, "driven_lanedir": 2.1175776338495185, "get_state_dump": 0.009221885754511909, "get_robot_state": 0.0040167276675884545, "sim_render-ego0": 0.004115244058462289, "get_duckie_state": 0.02565203446608323, "in-drivable-lane": 0.8999999999999968, "deviation-heading": 1.3801853404347686, "agent_compute-ego0": 0.10194785778339092, "complete-iteration": 0.30311637291541466, "set_robot_commands": 0.002568173408508301, "distance-from-start": 2.0398436211719253, "deviation-center-line": 0.35839973717404355, "driven_lanedir_consec": 2.1175776338495185, "sim_compute_sim_state": 0.009602775940528285, "sim_compute_performance-ego0": 0.0021412207530095025}}
set_robot_commands_max: 0.002568173408508301
set_robot_commands_mean: 0.002568173408508301
set_robot_commands_median: 0.002568173408508301
set_robot_commands_min: 0.002568173408508301
sim_compute_performance-ego0_max: 0.0021412207530095025
sim_compute_performance-ego0_mean: 0.0021412207530095025
sim_compute_performance-ego0_median: 0.0021412207530095025
sim_compute_performance-ego0_min: 0.0021412207530095025
sim_compute_sim_state_max: 0.009602775940528285
sim_compute_sim_state_mean: 0.009602775940528285
sim_compute_sim_state_median: 0.009602775940528285
sim_compute_sim_state_min: 0.009602775940528285
sim_render-ego0_max: 0.004115244058462289
sim_render-ego0_mean: 0.004115244058462289
sim_render-ego0_median: 0.004115244058462289
sim_render-ego0_min: 0.004115244058462289
simulation-passed: 1
step_physics_max: 0.12224850838000956
step_physics_mean: 0.12224850838000956
step_physics_median: 0.12224850838000956
step_physics_min: 0.12224850838000956
survival_time_max: 6.449999999999985
survival_time_mean: 6.449999999999985
survival_time_min: 6.449999999999985
No reset possible
Job 75345 · submission 13912 · YU CHEN · CBC Net v2 - test · aido-LFP-sim-validation · sim-2of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:02:00
survival_time_median: 6.249999999999986
in-drivable-lane_median: 0.34999999999999876
driven_lanedir_consec_median: 2.3507103011482604
deviation-center-line_median: 0.4058393882535043


other stats
agent_compute-ego0_max: 0.10069878896077472
agent_compute-ego0_mean: 0.10069878896077472
agent_compute-ego0_median: 0.10069878896077472
agent_compute-ego0_min: 0.10069878896077472
complete-iteration_max: 0.28813262780507404
complete-iteration_mean: 0.28813262780507404
complete-iteration_median: 0.28813262780507404
complete-iteration_min: 0.28813262780507404
deviation-center-line_max: 0.4058393882535043
deviation-center-line_mean: 0.4058393882535043
deviation-center-line_min: 0.4058393882535043
deviation-heading_max: 1.4234624845132315
deviation-heading_mean: 1.4234624845132315
deviation-heading_median: 1.4234624845132315
deviation-heading_min: 1.4234624845132315
distance-from-start_max: 2.032382761856445
distance-from-start_mean: 2.032382761856445
distance-from-start_median: 2.032382761856445
distance-from-start_min: 2.032382761856445
driven_any_max: 2.5929663791355577
driven_any_mean: 2.5929663791355577
driven_any_median: 2.5929663791355577
driven_any_min: 2.5929663791355577
driven_lanedir_consec_max: 2.3507103011482604
driven_lanedir_consec_mean: 2.3507103011482604
driven_lanedir_consec_min: 2.3507103011482604
driven_lanedir_max: 2.3507103011482604
driven_lanedir_mean: 2.3507103011482604
driven_lanedir_median: 2.3507103011482604
driven_lanedir_min: 2.3507103011482604
get_duckie_state_max: 0.026445646134633863
get_duckie_state_mean: 0.026445646134633863
get_duckie_state_median: 0.026445646134633863
get_duckie_state_min: 0.026445646134633863
get_robot_state_max: 0.004115218207949684
get_robot_state_mean: 0.004115218207949684
get_robot_state_median: 0.004115218207949684
get_robot_state_min: 0.004115218207949684
get_state_dump_max: 0.009318819121708946
get_state_dump_mean: 0.009318819121708946
get_state_dump_median: 0.009318819121708946
get_state_dump_min: 0.009318819121708946
get_ui_image_max: 0.0208964328917246
get_ui_image_mean: 0.0208964328917246
get_ui_image_median: 0.0208964328917246
get_ui_image_min: 0.0208964328917246
in-drivable-lane_max: 0.34999999999999876
in-drivable-lane_mean: 0.34999999999999876
in-drivable-lane_min: 0.34999999999999876
per-episodes details:
{"LFP-norm-loop-000-ego0": {"driven_any": 2.5929663791355577, "get_ui_image": 0.0208964328917246, "step_physics": 0.10833534361824156, "survival_time": 6.249999999999986, "driven_lanedir": 2.3507103011482604, "get_state_dump": 0.009318819121708946, "get_robot_state": 0.004115218207949684, "sim_render-ego0": 0.004178283706543938, "get_duckie_state": 0.026445646134633863, "in-drivable-lane": 0.34999999999999876, "deviation-heading": 1.4234624845132315, "agent_compute-ego0": 0.10069878896077472, "complete-iteration": 0.28813262780507404, "set_robot_commands": 0.002349965977218916, "distance-from-start": 0.3280197490067922, "deviation-center-line": 0.0614662572149956, "driven_lanedir_consec": 0.11935424826039598, "sim_compute_sim_state": 0.007954462519231832, "sim_compute_performance-ego0": 0.002064740882729584}}
set_robot_commands_max: 0.002519838393680633
set_robot_commands_mean: 0.002519838393680633
set_robot_commands_median: 0.002519838393680633
set_robot_commands_min: 0.002519838393680633
sim_compute_performance-ego0_max: 0.002098308669196235
sim_compute_performance-ego0_mean: 0.002098308669196235
sim_compute_performance-ego0_median: 0.002098308669196235
sim_compute_performance-ego0_min: 0.002098308669196235
sim_compute_sim_state_max: 0.009410133437504844
sim_compute_sim_state_mean: 0.009410133437504844
sim_compute_sim_state_median: 0.009410133437504844
sim_compute_sim_state_min: 0.009410133437504844
sim_render-ego0_max: 0.004178283706543938
sim_render-ego0_mean: 0.004178283706543938
sim_render-ego0_median: 0.004178283706543938
sim_render-ego0_min: 0.004178283706543938
simulation-passed: 1
step_physics_max: 0.10833534361824156
step_physics_mean: 0.10833534361824156
step_physics_median: 0.10833534361824156
step_physics_min: 0.10833534361824156
survival_time_max: 6.249999999999986
survival_time_mean: 6.249999999999986
survival_time_min: 6.249999999999986
No reset possible
Job 75342 · submission 13912 · YU CHEN · CBC Net v2 - test · aido-LFP-sim-validation · sim-2of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:02:00
survival_time_median: 6.699999999999984
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 2.5776928835144695
deviation-center-line_median: 0.42757312730270974


other stats
agent_compute-ego0_max: 0.09188224651195386
agent_compute-ego0_mean: 0.09188224651195386
agent_compute-ego0_median: 0.09188224651195386
agent_compute-ego0_min: 0.09188224651195386
complete-iteration_max: 0.27638568701567473
complete-iteration_mean: 0.27638568701567473
complete-iteration_median: 0.27638568701567473
complete-iteration_min: 0.27638568701567473
deviation-center-line_max: 0.42757312730270974
deviation-center-line_mean: 0.42757312730270974
deviation-center-line_min: 0.42757312730270974
deviation-heading_max: 1.66731056288737
deviation-heading_mean: 1.66731056288737
deviation-heading_median: 1.66731056288737
deviation-heading_min: 1.66731056288737
distance-from-start_max: 2.0110031346160477
distance-from-start_mean: 2.0110031346160477
distance-from-start_median: 2.0110031346160477
distance-from-start_min: 2.0110031346160477
driven_any_max: 2.672694084395635
driven_any_mean: 2.672694084395635
driven_any_median: 2.672694084395635
driven_any_min: 2.672694084395635
driven_lanedir_consec_max: 2.5776928835144695
driven_lanedir_consec_mean: 2.5776928835144695
driven_lanedir_consec_min: 2.5776928835144695
driven_lanedir_max: 2.5776928835144695
driven_lanedir_mean: 2.5776928835144695
driven_lanedir_median: 2.5776928835144695
driven_lanedir_min: 2.5776928835144695
get_duckie_state_max: 0.02441174895675094
get_duckie_state_mean: 0.02441174895675094
get_duckie_state_median: 0.02441174895675094
get_duckie_state_min: 0.02441174895675094
get_robot_state_max: 0.00375329300209328
get_robot_state_mean: 0.00375329300209328
get_robot_state_median: 0.00375329300209328
get_robot_state_min: 0.00375329300209328
get_state_dump_max: 0.008746399702849211
get_state_dump_mean: 0.008746399702849211
get_state_dump_median: 0.008746399702849211
get_state_dump_min: 0.008746399702849211
get_ui_image_max: 0.020485609549063224
get_ui_image_mean: 0.020485609549063224
get_ui_image_median: 0.020485609549063224
get_ui_image_min: 0.020485609549063224
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes details:
{"LFP-norm-loop-000-ego0": {"driven_any": 2.672694084395635, "get_ui_image": 0.020485609549063224, "step_physics": 0.10989730093214246, "survival_time": 6.699999999999984, "driven_lanedir": 2.5776928835144695, "get_state_dump": 0.008746399702849211, "get_robot_state": 0.00375329300209328, "sim_render-ego0": 0.003865708245171441, "get_duckie_state": 0.02441174895675094, "in-drivable-lane": 0.0, "deviation-heading": 1.66731056288737, "agent_compute-ego0": 0.09188224651195386, "complete-iteration": 0.27638568701567473, "set_robot_commands": 0.0023652094381826894, "distance-from-start": 2.0110031346160477, "deviation-center-line": 0.42757312730270974, "driven_lanedir_consec": 2.5776928835144695, "sim_compute_sim_state": 0.008952834871080187, "sim_compute_performance-ego0": 0.0019334616484465424}}
set_robot_commands_max: 0.0023652094381826894
set_robot_commands_mean: 0.0023652094381826894
set_robot_commands_median: 0.0023652094381826894
set_robot_commands_min: 0.0023652094381826894
sim_compute_performance-ego0_max: 0.0019334616484465424
sim_compute_performance-ego0_mean: 0.0019334616484465424
sim_compute_performance-ego0_median: 0.0019334616484465424
sim_compute_performance-ego0_min: 0.0019334616484465424
sim_compute_sim_state_max: 0.008952834871080187
sim_compute_sim_state_mean: 0.008952834871080187
sim_compute_sim_state_median: 0.008952834871080187
sim_compute_sim_state_min: 0.008952834871080187
sim_render-ego0_max: 0.003865708245171441
sim_render-ego0_mean: 0.003865708245171441
sim_render-ego0_median: 0.003865708245171441
sim_render-ego0_min: 0.003865708245171441
simulation-passed: 1
step_physics_max: 0.10989730093214246
step_physics_mean: 0.10989730093214246
step_physics_median: 0.10989730093214246
step_physics_min: 0.10989730093214246
survival_time_max: 6.699999999999984
survival_time_mean: 6.699999999999984
survival_time_min: 6.699999999999984
No reset possible
Job 75337 · submission 13697 · Samuel Alexander · template-pytorch · aido-LF-sim-validation · sim-3of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:06:34
driven_lanedir_consec_median: 1.004721626170082
survival_time_median: 30.300000000000296
deviation-center-line_median: 1.7479266198519514
in-drivable-lane_median: 15.250000000000156


other stats
agent_compute-ego0_max: 0.015020549591919148
agent_compute-ego0_mean: 0.015020549591919148
agent_compute-ego0_median: 0.015020549591919148
agent_compute-ego0_min: 0.015020549591919148
complete-iteration_max: 0.25128039143230807
complete-iteration_mean: 0.25128039143230807
complete-iteration_median: 0.25128039143230807
complete-iteration_min: 0.25128039143230807
deviation-center-line_max: 1.7479266198519514
deviation-center-line_mean: 1.7479266198519514
deviation-center-line_min: 1.7479266198519514
deviation-heading_max: 11.39786618392805
deviation-heading_mean: 11.39786618392805
deviation-heading_median: 11.39786618392805
deviation-heading_min: 11.39786618392805
distance-from-start_max: 0.19504761241970353
distance-from-start_mean: 0.19504761241970353
distance-from-start_median: 0.19504761241970353
distance-from-start_min: 0.19504761241970353
driven_any_max: 2.8349685337765704
driven_any_mean: 2.8349685337765704
driven_any_median: 2.8349685337765704
driven_any_min: 2.8349685337765704
driven_lanedir_consec_max: 1.004721626170082
driven_lanedir_consec_mean: 1.004721626170082
driven_lanedir_consec_min: 1.004721626170082
driven_lanedir_max: 1.004721626170082
driven_lanedir_mean: 1.004721626170082
driven_lanedir_median: 1.004721626170082
driven_lanedir_min: 1.004721626170082
get_duckie_state_max: 1.3590251790632723e-06
get_duckie_state_mean: 1.3590251790632723e-06
get_duckie_state_median: 1.3590251790632723e-06
get_duckie_state_min: 1.3590251790632723e-06
get_robot_state_max: 0.0039530644896002935
get_robot_state_mean: 0.0039530644896002935
get_robot_state_median: 0.0039530644896002935
get_robot_state_min: 0.0039530644896002935
get_state_dump_max: 0.00481057638391435
get_state_dump_mean: 0.00481057638391435
get_state_dump_median: 0.00481057638391435
get_state_dump_min: 0.00481057638391435
get_ui_image_max: 0.025420766293118773
get_ui_image_mean: 0.025420766293118773
get_ui_image_median: 0.025420766293118773
get_ui_image_min: 0.025420766293118773
in-drivable-lane_max: 15.250000000000156
in-drivable-lane_mean: 15.250000000000156
in-drivable-lane_min: 15.250000000000156
per-episodes details:
{"LF-norm-zigzag-000-ego0": {"driven_any": 2.8349685337765704, "get_ui_image": 0.025420766293118773, "step_physics": 0.18252881433464943, "survival_time": 30.300000000000296, "driven_lanedir": 1.004721626170082, "get_state_dump": 0.00481057638391435, "get_robot_state": 0.0039530644896002935, "sim_render-ego0": 0.0041413087232108955, "get_duckie_state": 1.3590251790632723e-06, "in-drivable-lane": 15.250000000000156, "deviation-heading": 11.39786618392805, "agent_compute-ego0": 0.015020549591919148, "complete-iteration": 0.25128039143230807, "set_robot_commands": 0.0024690392382651616, "distance-from-start": 0.19504761241970353, "deviation-center-line": 1.7479266198519514, "driven_lanedir_consec": 1.004721626170082, "sim_compute_sim_state": 0.01080247990578364, "sim_compute_performance-ego0": 0.0020461149231409516}}
set_robot_commands_max: 0.0024690392382651616
set_robot_commands_mean: 0.0024690392382651616
set_robot_commands_median: 0.0024690392382651616
set_robot_commands_min: 0.0024690392382651616
sim_compute_performance-ego0_max: 0.0020461149231409516
sim_compute_performance-ego0_mean: 0.0020461149231409516
sim_compute_performance-ego0_median: 0.0020461149231409516
sim_compute_performance-ego0_min: 0.0020461149231409516
sim_compute_sim_state_max: 0.01080247990578364
sim_compute_sim_state_mean: 0.01080247990578364
sim_compute_sim_state_median: 0.01080247990578364
sim_compute_sim_state_min: 0.01080247990578364
sim_render-ego0_max: 0.0041413087232108955
sim_render-ego0_mean: 0.0041413087232108955
sim_render-ego0_median: 0.0041413087232108955
sim_render-ego0_min: 0.0041413087232108955
simulation-passed: 1
step_physics_max: 0.18252881433464943
step_physics_mean: 0.18252881433464943
step_physics_median: 0.18252881433464943
step_physics_min: 0.18252881433464943
survival_time_max: 30.300000000000296
survival_time_mean: 30.300000000000296
survival_time_min: 30.300000000000296
No reset possible
Job 75323 · submission 13939 · YU CHEN · CBC Net v2 test - added mar 31 dataset · aido-LFP-sim-validation · sim-1of4 · success · up to date: no · gpu-production-spot-0-01 · duration 0:08:12
survival_time_median: 48.84999999999936
in-drivable-lane_median: 23.699999999999736
driven_lanedir_consec_median: 7.872197177291706
deviation-center-line_median: 1.745408905769881


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.09057755387390076
complete-iteration: 0.24401701252397096
deviation-center-line: 1.745408905769881
deviation-heading: 9.284715435688664
distance-from-start: 1.2243062737312878
driven_any: 14.871464603669969
driven_lanedir_consec: 7.872197177291706
driven_lanedir: 7.872197177291706
get_duckie_state: 0.004343626201762256
get_robot_state: 0.0038164827233687017
get_state_dump: 0.00549678739106972
get_ui_image: 0.01904416913147597
in-drivable-lane: 23.699999999999736
set_robot_commands: 0.002364799045102484
sim_compute_performance-ego0: 0.001972080983511022
sim_compute_sim_state: 0.005642336082848547
sim_render-ego0: 0.0039176770270723995
simulation-passed: 1
step_physics: 0.10674910628234924
survival_time: 48.84999999999936

per-episode details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 14.871464603669969, "get_ui_image": 0.01904416913147597, "step_physics": 0.10674910628234924, "survival_time": 48.84999999999936, "driven_lanedir": 7.872197177291706, "get_state_dump": 0.00549678739106972, "get_robot_state": 0.0038164827233687017, "sim_render-ego0": 0.0039176770270723995, "get_duckie_state": 0.004343626201762256, "in-drivable-lane": 23.699999999999736, "deviation-heading": 9.284715435688664, "agent_compute-ego0": 0.09057755387390076, "complete-iteration": 0.24401701252397096, "set_robot_commands": 0.002364799045102484, "distance-from-start": 1.2243062737312878, "deviation-center-line": 1.745408905769881, "driven_lanedir_consec": 7.872197177291706, "sim_compute_sim_state": 0.005642336082848547, "sim_compute_performance-ego0": 0.001972080983511022}}
No reset possible
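Every job on this page reports identical max/mean/median/min values because each step ran exactly one episode. As an illustration only (the `aggregate` helper below is hypothetical, not the evaluator's actual code, and the dict reuses a subset of the real episode data), the four aggregates can be recomputed from the per-episode details mapping:

```python
from statistics import mean, median

# Subset of the per-episode details shown above (one episode).
details = {
    "LFP-norm-small_loop-000-ego0": {
        "survival_time": 48.84999999999936,
        "deviation-center-line": 1.745408905769881,
    },
}

def aggregate(details: dict) -> dict:
    """Collect each metric across episodes and compute max/mean/median/min."""
    per_metric: dict = {}
    for episode_stats in details.values():
        for metric, value in episode_stats.items():
            per_metric.setdefault(metric, []).append(value)
    return {
        metric: {
            "max": max(values),
            "mean": mean(values),
            "median": median(values),
            "min": min(values),
        }
        for metric, values in per_metric.items()
    }

stats = aggregate(details)
# With a single episode, all four aggregates necessarily coincide.
assert len(set(stats["survival_time"].values())) == 1
```

With more than one episode in the details mapping, the four aggregates would start to differ, which is what the separate _max/_mean/_median/_min rows on this page exist to capture.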
Job 75312 | submission 13941 | YU CHEN | "CBC Net v2 test - added mar 31 anomaly + mar 28 bc" | aido-LFP-sim-validation | sim-1of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:09:58
survival_time_median: 59.99999999999873
in-drivable-lane_median: 30.54999999999939
driven_lanedir_consec_median: 11.32164249820528
deviation-center-line_median: 2.2233713941799516


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.09538926828116004
complete-iteration: 0.24995938705266463
deviation-center-line: 2.2233713941799516
deviation-heading: 14.493539121309254
distance-from-start: 1.269636757155848
driven_any: 23.784356522854633
driven_lanedir_consec: 11.32164249820528
driven_lanedir: 11.32164249820528
get_duckie_state: 0.004313230713043086
get_robot_state: 0.00379465223847579
get_state_dump: 0.005537537313520064
get_ui_image: 0.018749061968801023
in-drivable-lane: 30.54999999999939
set_robot_commands: 0.002367511776265852
sim_compute_performance-ego0: 0.0019738924294883864
sim_compute_sim_state: 0.005681380741205144
sim_render-ego0: 0.0038968954951836606
simulation-passed: 1
step_physics: 0.10816265026000416
survival_time: 59.99999999999873

per-episode details: {"LFP-norm-small_loop-000-ego0": {"driven_any": 23.784356522854633, "get_ui_image": 0.018749061968801023, "step_physics": 0.10816265026000416, "survival_time": 59.99999999999873, "driven_lanedir": 11.32164249820528, "get_state_dump": 0.005537537313520064, "get_robot_state": 0.00379465223847579, "sim_render-ego0": 0.0038968954951836606, "get_duckie_state": 0.004313230713043086, "in-drivable-lane": 30.54999999999939, "deviation-heading": 14.493539121309254, "agent_compute-ego0": 0.09538926828116004, "complete-iteration": 0.24995938705266463, "set_robot_commands": 0.002367511776265852, "distance-from-start": 1.269636757155848, "deviation-center-line": 2.2233713941799516, "driven_lanedir_consec": 11.32164249820528, "sim_compute_sim_state": 0.005681380741205144, "sim_compute_performance-ego0": 0.0019738924294883864}}
No reset possible
Job 75311 | submission 13732 | YU CHEN | "BC Net V2" | aido-LF-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:01:39
driven_lanedir_consec_median: 1.0809829640758255
survival_time_median: 4.599999999999992
deviation-center-line_median: 0.1320309247068626
in-drivable-lane_median: 1.699999999999994


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.06494849471635716
complete-iteration: 0.24778504012733377
deviation-center-line: 0.1320309247068626
deviation-heading: 0.7612711557107422
distance-from-start: 1.3387446781005392
driven_any: 1.5887947345485487
driven_lanedir_consec: 1.0809829640758255
driven_lanedir: 1.0809829640758255
get_duckie_state: 1.6638027724399362e-06
get_robot_state: 0.004322749312205981
get_state_dump: 0.005592810210361275
get_ui_image: 0.026331501622353832
in-drivable-lane: 1.699999999999994
set_robot_commands: 0.00263659672070575
sim_compute_performance-ego0: 0.002261105404105238
sim_compute_sim_state: 0.009907986528129988
sim_render-ego0: 0.004555589409284694
simulation-passed: 1
step_physics: 0.12711757485584546
survival_time: 4.599999999999992

per-episode details: {"LF-norm-zigzag-000-ego0": {"driven_any": 1.5887947345485487, "get_ui_image": 0.026331501622353832, "step_physics": 0.12711757485584546, "survival_time": 4.599999999999992, "driven_lanedir": 1.0809829640758255, "get_state_dump": 0.005592810210361275, "get_robot_state": 0.004322749312205981, "sim_render-ego0": 0.004555589409284694, "get_duckie_state": 1.6638027724399362e-06, "in-drivable-lane": 1.699999999999994, "deviation-heading": 0.7612711557107422, "agent_compute-ego0": 0.06494849471635716, "complete-iteration": 0.24778504012733377, "set_robot_commands": 0.00263659672070575, "distance-from-start": 1.3387446781005392, "deviation-center-line": 0.1320309247068626, "driven_lanedir_consec": 1.0809829640758255, "sim_compute_sim_state": 0.009907986528129988, "sim_compute_performance-ego0": 0.002261105404105238}}
No reset possible
Job 75303 | submission 13732 | YU CHEN | "BC Net V2" | aido-LF-sim-validation | sim-0of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:03:51
driven_lanedir_consec_median: 1.8467687179477377
survival_time_median: 7.449999999999981
deviation-center-line_median: 0.5372383500277994
in-drivable-lane_median: 1.5499999999999945


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.057611905733744306
complete-iteration: 0.20935432434082032
deviation-center-line: 0.5372383500277994
deviation-heading: 2.3605482844764665
distance-from-start: 2.138439617588162
driven_any: 2.775317851736479
driven_lanedir_consec: 1.8467687179477377
driven_lanedir: 1.8467687179477377
get_duckie_state: 2.02178955078125e-06
get_robot_state: 0.0038389221827189127
get_state_dump: 0.00489376703898112
get_ui_image: 0.019717400868733723
in-drivable-lane: 1.5499999999999945
set_robot_commands: 0.002434352238972982
sim_compute_performance-ego0: 0.002059575716654459
sim_compute_sim_state: 0.008526466687520344
sim_render-ego0: 0.004020303090413411
simulation-passed: 1
step_physics: 0.10615557034810384
survival_time: 7.449999999999981

per-episode details: {"LF-norm-loop-000-ego0": {"driven_any": 2.775317851736479, "get_ui_image": 0.019717400868733723, "step_physics": 0.10615557034810384, "survival_time": 7.449999999999981, "driven_lanedir": 1.8467687179477377, "get_state_dump": 0.00489376703898112, "get_robot_state": 0.0038389221827189127, "sim_render-ego0": 0.004020303090413411, "get_duckie_state": 2.02178955078125e-06, "in-drivable-lane": 1.5499999999999945, "deviation-heading": 2.3605482844764665, "agent_compute-ego0": 0.057611905733744306, "complete-iteration": 0.20935432434082032, "set_robot_commands": 0.002434352238972982, "distance-from-start": 2.138439617588162, "deviation-center-line": 0.5372383500277994, "driven_lanedir_consec": 1.8467687179477377, "sim_compute_sim_state": 0.008526466687520344, "sim_compute_performance-ego0": 0.002059575716654459}}
No reset possible
Job 75301 | submission 13798 | Nicholas Kostelnik | "template-random" | aido-hello-sim-validation | step 370 | aborted | up to date: yes | evaluator gpu-production-spot-0-01 | duration 0:00:22
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 944, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
No reset possible
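The traceback above shows `docker_pull_retry` attempting the pull 4 times (ntimes=4, wait=5) before giving up with a `PullError`. A minimal sketch of that retry pattern, using a plain callable in place of the Docker SDK; this is an assumption about the shape of the logic, not the actual `duckietown_build_utils` source:

```python
import time

class PullError(Exception):
    """Stand-in for duckietown_build_utils.docker_pulling.PullError."""

def pull_with_retry(pull, ntimes: int = 4, wait: float = 5.0):
    """Call pull() up to ntimes, sleeping between attempts; re-raise on exhaustion."""
    last = None
    for attempt in range(ntimes):
        try:
            return pull()
        except PullError as e:
            last = e
            if attempt < ntimes - 1:
                time.sleep(wait)
    raise PullError(f"After trying {ntimes} I still could not pull") from last
```

Note that a "pull access denied" 404, as here, fails deterministically: retrying cannot recover, and the job is aborted after the final attempt.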
Job 75299 | submission 13798 | Nicholas Kostelnik | "template-random" | aido-hello-sim-validation | step 370 | aborted | up to date: yes | evaluator gpu-production-spot-0-01 | duration 0:00:47
Uncaught exception: duckietown_build_utils.docker_pulling.PullError. The full traceback is identical to job 75301 above: pull access denied for nitaigao/aido-submissions, and after 4 attempts the runner could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691.
No reset possible
Job 75296 | submission 13911 | YU CHEN | "CBC Net v2 - test" | aido-LF-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:06:37
driven_lanedir_consec_median: 8.169713696274194
survival_time_median: 33.25000000000025
deviation-center-line_median: 1.7895115421538923
in-drivable-lane_median: 8.850000000000035


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.09198931888775068
complete-iteration: 0.2876519179558969
deviation-center-line: 1.7895115421538923
deviation-heading: 7.198411217727784
distance-from-start: 3.676945704522359
driven_any: 12.428457701540337
driven_lanedir_consec: 8.169713696274194
driven_lanedir: 8.169713696274194
get_duckie_state: 1.3431629261097034e-06
get_robot_state: 0.0038627239318939303
get_state_dump: 0.004751746718947952
get_ui_image: 0.02480438629070202
in-drivable-lane: 8.850000000000035
set_robot_commands: 0.002416405591878805
sim_compute_performance-ego0: 0.0020070602227975657
sim_compute_sim_state: 0.012858173152705928
sim_render-ego0: 0.004064922934179907
simulation-passed: 1
step_physics: 0.14080717255761316
survival_time: 33.25000000000025

per-episode details: {"LF-norm-zigzag-000-ego0": {"driven_any": 12.428457701540337, "get_ui_image": 0.02480438629070202, "step_physics": 0.14080717255761316, "survival_time": 33.25000000000025, "driven_lanedir": 8.169713696274194, "get_state_dump": 0.004751746718947952, "get_robot_state": 0.0038627239318939303, "sim_render-ego0": 0.004064922934179907, "get_duckie_state": 1.3431629261097034e-06, "in-drivable-lane": 8.850000000000035, "deviation-heading": 7.198411217727784, "agent_compute-ego0": 0.09198931888775068, "complete-iteration": 0.2876519179558969, "set_robot_commands": 0.002416405591878805, "distance-from-start": 3.676945704522359, "deviation-center-line": 1.7895115421538923, "driven_lanedir_consec": 8.169713696274194, "sim_compute_sim_state": 0.012858173152705928, "sim_compute_performance-ego0": 0.0020070602227975657}}
No reset possible
Job 75292 | submission 13911 | YU CHEN | "CBC Net v2 - test" | aido-LF-sim-validation | sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:02:55
driven_lanedir_consec_median: 3.344186415304674
survival_time_median: 12.25000000000004
deviation-center-line_median: 0.6455340052110284
in-drivable-lane_median: 1.8499999999999996


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.09241690189857792
complete-iteration: 0.27130815362542626
deviation-center-line: 0.6455340052110284
deviation-heading: 2.121102307759451
distance-from-start: 2.0810797892725494
driven_any: 4.402918107578135
driven_lanedir_consec: 3.344186415304674
driven_lanedir: 3.344186415304674
get_duckie_state: 1.2560588557545732e-06
get_robot_state: 0.003783519675091999
get_state_dump: 0.00481619873667151
get_ui_image: 0.024412639741975117
in-drivable-lane: 1.8499999999999996
set_robot_commands: 0.002432876486119216
sim_compute_performance-ego0: 0.0019814212147782487
sim_compute_sim_state: 0.010002142045556044
sim_render-ego0: 0.004001170639100113
simulation-passed: 1
step_physics: 0.12736489714645757
survival_time: 12.25000000000004

per-episode details: {"LF-norm-zigzag-000-ego0": {"driven_any": 4.402918107578135, "get_ui_image": 0.024412639741975117, "step_physics": 0.12736489714645757, "survival_time": 12.25000000000004, "driven_lanedir": 3.344186415304674, "get_state_dump": 0.00481619873667151, "get_robot_state": 0.003783519675091999, "sim_render-ego0": 0.004001170639100113, "get_duckie_state": 1.2560588557545732e-06, "in-drivable-lane": 1.8499999999999996, "deviation-heading": 2.121102307759451, "agent_compute-ego0": 0.09241690189857792, "complete-iteration": 0.27130815362542626, "set_robot_commands": 0.002432876486119216, "distance-from-start": 2.0810797892725494, "deviation-center-line": 0.6455340052110284, "driven_lanedir_consec": 3.344186415304674, "sim_compute_sim_state": 0.010002142045556044, "sim_compute_performance-ego0": 0.0019814212147782487}}
No reset possible
Job 75290 | submission 13911 | YU CHEN | "CBC Net v2 - test" | aido-LF-sim-validation | sim-0of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:01:59
driven_lanedir_consec_median: 2.431875850002865
survival_time_median: 6.5999999999999845
deviation-center-line_median: 0.5085493539094987
in-drivable-lane_median: 0.0


other stats
Single episode, so max = mean = median = min for every metric; each is listed once:

agent_compute-ego0: 0.09909799941500327
complete-iteration: 0.26085427291411206
deviation-center-line: 0.5085493539094987
deviation-heading: 1.52790354930775
distance-from-start: 2.012466626869996
driven_any: 2.528754398713022
driven_lanedir_consec: 2.431875850002865
driven_lanedir: 2.431875850002865
get_duckie_state: 1.6187366686369244e-06
get_robot_state: 0.003985058992428887
get_state_dump: 0.005076481883687184
get_ui_image: 0.020576055784870807
in-drivable-lane: 0.0

per-episode details: {"LF-norm-loop-000-ego0": {"driven_any": 2.528754398713022, "get_ui_image": 0.020576055784870807, "step_physics": 0.11432176783568876, "survival_time": 6.5999999999999845, "driven_lanedir": 2.431875850002865, "get_state_dump": 0.005076481883687184, "get_robot_state": 0.003985058992428887, "sim_render-ego0": 0.004155038890982033, "get_duckie_state": 1.6187366686369244e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.52790354930775, "agent_compute-ego0": 0.09909799941500327, "complete-iteration": 0.26085427291411206, "set_robot_commands": 0.0025466044146315496, "distance-from-start": 2.012466626869996, "deviation-center-line": 0.5085493539094987, "driven_lanedir_consec": 2.431875850002865, "sim_compute_sim_state": 0.00885312897818429, "sim_compute_performance-ego0": 0.002142931285657381}}
set_robot_commands_max0.0025466044146315496
set_robot_commands_mean0.0025466044146315496
set_robot_commands_median0.0025466044146315496
set_robot_commands_min0.0025466044146315496
sim_compute_performance-ego0_max0.002142931285657381
sim_compute_performance-ego0_mean0.002142931285657381
sim_compute_performance-ego0_median0.002142931285657381
sim_compute_performance-ego0_min0.002142931285657381
sim_compute_sim_state_max0.00885312897818429
sim_compute_sim_state_mean0.00885312897818429
sim_compute_sim_state_median0.00885312897818429
sim_compute_sim_state_min0.00885312897818429
sim_render-ego0_max0.004155038890982033
sim_render-ego0_mean0.004155038890982033
sim_render-ego0_median0.004155038890982033
sim_render-ego0_min0.004155038890982033
simulation-passed1
step_physics_max0.11432176783568876
step_physics_mean0.11432176783568876
step_physics_median0.11432176783568876
step_physics_min0.11432176783568876
survival_time_max6.5999999999999845
survival_time_mean6.5999999999999845
survival_time_min6.5999999999999845
No reset possible
Job ID 75283 | submission 13938 | user: YU CHEN | label: CBC Net v2 test - added mar 31 dataset | challenge: aido-LF-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:10:02
driven_lanedir_consec_median: 13.93744030694645
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.8075550617938805
in-drivable-lane_median: 23.34999999999944


other stats
agent_compute-ego0_max0.09219929498994876
agent_compute-ego0_mean0.09219929498994876
agent_compute-ego0_median0.09219929498994876
agent_compute-ego0_min0.09219929498994876
complete-iteration_max0.24835517186110065
complete-iteration_mean0.24835517186110065
complete-iteration_median0.24835517186110065
complete-iteration_min0.24835517186110065
deviation-center-line_max2.8075550617938805
deviation-center-line_mean2.8075550617938805
deviation-center-line_min2.8075550617938805
deviation-heading_max8.303650987085593
deviation-heading_mean8.303650987085593
deviation-heading_median8.303650987085593
deviation-heading_min8.303650987085593
distance-from-start_max2.97841479210523
distance-from-start_mean2.97841479210523
distance-from-start_median2.97841479210523
distance-from-start_min2.97841479210523
driven_any_max22.527681554015828
driven_any_mean22.527681554015828
driven_any_median22.527681554015828
driven_any_min22.527681554015828
driven_lanedir_consec_max13.93744030694645
driven_lanedir_consec_mean13.93744030694645
driven_lanedir_consec_min13.93744030694645
driven_lanedir_max13.93744030694645
driven_lanedir_mean13.93744030694645
driven_lanedir_median13.93744030694645
driven_lanedir_min13.93744030694645
get_duckie_state_max1.3693683252644282e-06
get_duckie_state_mean1.3693683252644282e-06
get_duckie_state_median1.3693683252644282e-06
get_duckie_state_min1.3693683252644282e-06
get_robot_state_max0.0039090385643469104
get_robot_state_mean0.0039090385643469104
get_robot_state_median0.0039090385643469104
get_robot_state_min0.0039090385643469104
get_state_dump_max0.004904383922198928
get_state_dump_mean0.004904383922198928
get_state_dump_median0.004904383922198928
get_state_dump_min0.004904383922198928
get_ui_image_max0.020078665410152186
get_ui_image_mean0.020078665410152186
get_ui_image_median0.020078665410152186
get_ui_image_min0.020078665410152186
in-drivable-lane_max23.34999999999944
in-drivable-lane_mean23.34999999999944
in-drivable-lane_min23.34999999999944
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 22.527681554015828, "get_ui_image": 0.020078665410152186, "step_physics": 0.11032333977514262, "survival_time": 59.99999999999873, "driven_lanedir": 13.93744030694645, "get_state_dump": 0.004904383922198928, "get_robot_state": 0.0039090385643469104, "sim_render-ego0": 0.004050111095673039, "get_duckie_state": 1.3693683252644282e-06, "in-drivable-lane": 23.34999999999944, "deviation-heading": 8.303650987085593, "agent_compute-ego0": 0.09219929498994876, "complete-iteration": 0.24835517186110065, "set_robot_commands": 0.002449579977373795, "distance-from-start": 2.97841479210523, "deviation-center-line": 2.8075550617938805, "driven_lanedir_consec": 13.93744030694645, "sim_compute_sim_state": 0.008274966135906439, "sim_compute_performance-ego0": 0.0020719704084054915}}
set_robot_commands_max0.002449579977373795
set_robot_commands_mean0.002449579977373795
set_robot_commands_median0.002449579977373795
set_robot_commands_min0.002449579977373795
sim_compute_performance-ego0_max0.0020719704084054915
sim_compute_performance-ego0_mean0.0020719704084054915
sim_compute_performance-ego0_median0.0020719704084054915
sim_compute_performance-ego0_min0.0020719704084054915
sim_compute_sim_state_max0.008274966135906439
sim_compute_sim_state_mean0.008274966135906439
sim_compute_sim_state_median0.008274966135906439
sim_compute_sim_state_min0.008274966135906439
sim_render-ego0_max0.004050111095673039
sim_render-ego0_mean0.004050111095673039
sim_render-ego0_median0.004050111095673039
sim_render-ego0_min0.004050111095673039
simulation-passed1
step_physics_max0.11032333977514262
step_physics_mean0.11032333977514262
step_physics_median0.11032333977514262
step_physics_min0.11032333977514262
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job ID 75279 | submission 13940 | user: YU CHEN | label: CBC Net v2 test - added mar 31 anomaly + mar 28 bc | challenge: aido-LF-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:06:21
driven_lanedir_consec_median: 5.7800989560971265
survival_time_median: 31.500000000000313
deviation-center-line_median: 1.4972239253957331
in-drivable-lane_median: 14.800000000000164


other stats
agent_compute-ego0_max0.0905532716004482
agent_compute-ego0_mean0.0905532716004482
agent_compute-ego0_median0.0905532716004482
agent_compute-ego0_min0.0905532716004482
complete-iteration_max0.27746288787732976
complete-iteration_mean0.27746288787732976
complete-iteration_median0.27746288787732976
complete-iteration_min0.27746288787732976
deviation-center-line_max1.4972239253957331
deviation-center-line_mean1.4972239253957331
deviation-center-line_min1.4972239253957331
deviation-heading_max5.601418307372564
deviation-heading_mean5.601418307372564
deviation-heading_median5.601418307372564
deviation-heading_min5.601418307372564
distance-from-start_max3.5419248795446427
distance-from-start_mean3.5419248795446427
distance-from-start_median3.5419248795446427
distance-from-start_min3.5419248795446427
driven_any_max12.137889247885642
driven_any_mean12.137889247885642
driven_any_median12.137889247885642
driven_any_min12.137889247885642
driven_lanedir_consec_max5.7800989560971265
driven_lanedir_consec_mean5.7800989560971265
driven_lanedir_consec_min5.7800989560971265
driven_lanedir_max5.7800989560971265
driven_lanedir_mean5.7800989560971265
driven_lanedir_median5.7800989560971265
driven_lanedir_min5.7800989560971265
get_duckie_state_max2.106849440681953e-06
get_duckie_state_mean2.106849440681953e-06
get_duckie_state_median2.106849440681953e-06
get_duckie_state_min2.106849440681953e-06
get_robot_state_max0.0037565820757447257
get_robot_state_mean0.0037565820757447257
get_robot_state_median0.0037565820757447257
get_robot_state_min0.0037565820757447257
get_state_dump_max0.004815941189434941
get_state_dump_mean0.004815941189434941
get_state_dump_median0.004815941189434941
get_state_dump_min0.004815941189434941
get_ui_image_max0.024769626971092164
get_ui_image_mean0.024769626971092164
get_ui_image_median0.024769626971092164
get_ui_image_min0.024769626971092164
in-drivable-lane_max14.800000000000164
in-drivable-lane_mean14.800000000000164
in-drivable-lane_min14.800000000000164
per-episodes
details{"LF-norm-zigzag-000-ego0": {"driven_any": 12.137889247885642, "get_ui_image": 0.024769626971092164, "step_physics": 0.1326090548949083, "survival_time": 31.500000000000313, "driven_lanedir": 5.7800989560971265, "get_state_dump": 0.004815941189434941, "get_robot_state": 0.0037565820757447257, "sim_render-ego0": 0.004058287752033603, "get_duckie_state": 2.106849440681953e-06, "in-drivable-lane": 14.800000000000164, "deviation-heading": 5.601418307372564, "agent_compute-ego0": 0.0905532716004482, "complete-iteration": 0.27746288787732976, "set_robot_commands": 0.0023855086173950775, "distance-from-start": 3.5419248795446427, "deviation-center-line": 1.4972239253957331, "driven_lanedir_consec": 5.7800989560971265, "sim_compute_sim_state": 0.012393287546850044, "sim_compute_performance-ego0": 0.0020274979943518782}}
set_robot_commands_max0.0023855086173950775
set_robot_commands_mean0.0023855086173950775
set_robot_commands_median0.0023855086173950775
set_robot_commands_min0.0023855086173950775
sim_compute_performance-ego0_max0.0020274979943518782
sim_compute_performance-ego0_mean0.0020274979943518782
sim_compute_performance-ego0_median0.0020274979943518782
sim_compute_performance-ego0_min0.0020274979943518782
sim_compute_sim_state_max0.012393287546850044
sim_compute_sim_state_mean0.012393287546850044
sim_compute_sim_state_median0.012393287546850044
sim_compute_sim_state_min0.012393287546850044
sim_render-ego0_max0.004058287752033603
sim_render-ego0_mean0.004058287752033603
sim_render-ego0_median0.004058287752033603
sim_render-ego0_min0.004058287752033603
simulation-passed1
step_physics_max0.1326090548949083
step_physics_mean0.1326090548949083
step_physics_median0.1326090548949083
step_physics_min0.1326090548949083
survival_time_max31.500000000000313
survival_time_mean31.500000000000313
survival_time_min31.500000000000313
No reset possible
Job ID 75224 | submission 13547 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFV_multi-sim-testing | step: 426 | status: success | up to date: yes | evaluator: gpu-production-spot-0-01 | duration: 0:56:22
survival_time_median: 59.99999999999873
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 27.909346787841628
deviation-center-line_median: 2.424783726141229


other stats
agent_compute-ego0_max0.014864894373033764
agent_compute-ego0_mean0.014438509345550918
agent_compute-ego0_median0.014438509345550918
agent_compute-ego0_min0.014012124318068073
agent_compute-ego1_max0.014687939944811207
agent_compute-ego1_mean0.014230347791381125
agent_compute-ego1_median0.014230347791381125
agent_compute-ego1_min0.01377275563795104
agent_compute-ego2_max0.014832843054740455
agent_compute-ego2_mean0.014465047060500374
agent_compute-ego2_median0.014465047060500374
agent_compute-ego2_min0.014097251066260294
agent_compute-ego3_max0.014908085655510972
agent_compute-ego3_mean0.014563065583660243
agent_compute-ego3_median0.014563065583660243
agent_compute-ego3_min0.014218045511809515
complete-iteration_max0.7254101972000288
complete-iteration_mean0.6265694267247539
complete-iteration_median0.6265694267247539
complete-iteration_min0.5277286562494791
deviation-center-line_max3.006798389755633
deviation-center-line_mean2.409586240631594
deviation-center-line_min1.9153540425904776
deviation-heading_max12.738645796383668
deviation-heading_mean9.681737872967249
deviation-heading_median9.833907349348888
deviation-heading_min7.442841644968314
distance-from-start_max4.425133064120963
distance-from-start_mean3.348979898269044
distance-from-start_median3.2946137059682323
distance-from-start_min2.4751621171274465
driven_any_max29.766296281886575
driven_any_mean28.507428903729647
driven_any_median28.456281262075695
driven_any_min27.179027148097784
driven_lanedir_consec_max29.45647239009414
driven_lanedir_consec_mean27.98941992854496
driven_lanedir_consec_min26.345301192652084
driven_lanedir_max29.45647239009414
driven_lanedir_mean27.98941992854496
driven_lanedir_median27.909346787841628
driven_lanedir_min26.345301192652084
get_duckie_state_max1.7830771669360818e-06
get_duckie_state_mean1.6880869170608964e-06
get_duckie_state_median1.6880869170608964e-06
get_duckie_state_min1.593096667185711e-06
get_robot_state_max0.01539122016106319
get_robot_state_mean0.015131080875190273
get_robot_state_median0.015131080875190273
get_robot_state_min0.014870941589317354
get_state_dump_max0.0102985778716482
get_state_dump_mean0.010109537646335726
get_state_dump_median0.010109537646335726
get_state_dump_min0.009920497421023251
get_ui_image_max0.026091785057696777
get_ui_image_mean0.02362857432687015
get_ui_image_median0.02362857432687015
get_ui_image_min0.02116536359604352
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFV_multi-norm-loop-000-ego0": {"driven_any": 29.766296281886575, "get_ui_image": 0.02116536359604352, "step_physics": 0.36571227263451417, "survival_time": 59.99999999999873, "driven_lanedir": 29.45647239009414, "get_state_dump": 0.009920497421023251, "get_robot_state": 0.014870941589317354, "sim_render-ego0": 0.00400231343125622, "sim_render-ego1": 0.003935870282556691, "sim_render-ego2": 0.003942992268355066, "sim_render-ego3": 0.0039356336506280575, "get_duckie_state": 1.593096667185711e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.442841644968314, "agent_compute-ego0": 0.014012124318068073, "agent_compute-ego1": 0.01377275563795104, "agent_compute-ego2": 0.014097251066260294, "agent_compute-ego3": 0.014218045511809515, "complete-iteration": 0.5277286562494791, "set_robot_commands": 0.002340986964903108, "distance-from-start": 2.4751621171274465, "deviation-center-line": 1.9249479963166456, "driven_lanedir_consec": 29.45647239009414, "sim_compute_sim_state": 0.026428774135694416, "sim_compute_performance-ego0": 0.0021103593729417786, "sim_compute_performance-ego1": 0.001991505031283948, "sim_compute_performance-ego2": 0.002029534680559474, "sim_compute_performance-ego3": 0.0019999050677170067}, "LFV_multi-norm-loop-000-ego1": {"driven_any": 29.05804390635504, "get_ui_image": 0.02116536359604352, "step_physics": 0.36571227263451417, "survival_time": 59.99999999999873, "driven_lanedir": 28.59620162595957, "get_state_dump": 0.009920497421023251, "get_robot_state": 0.014870941589317354, "sim_render-ego0": 0.00400231343125622, "sim_render-ego1": 0.003935870282556691, "sim_render-ego2": 0.003942992268355066, "sim_render-ego3": 0.0039356336506280575, "get_duckie_state": 1.593096667185711e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.17196142808676, "agent_compute-ego0": 0.014012124318068073, "agent_compute-ego1": 0.01377275563795104, "agent_compute-ego2": 0.014097251066260294, "agent_compute-ego3": 0.014218045511809515, "complete-iteration": 
0.5277286562494791, "set_robot_commands": 0.002340986964903108, "distance-from-start": 2.593242935199555, "deviation-center-line": 2.1662346860764465, "driven_lanedir_consec": 28.59620162595957, "sim_compute_sim_state": 0.026428774135694416, "sim_compute_performance-ego0": 0.0021103593729417786, "sim_compute_performance-ego1": 0.001991505031283948, "sim_compute_performance-ego2": 0.002029534680559474, "sim_compute_performance-ego3": 0.0019999050677170067}, "LFV_multi-norm-loop-000-ego2": {"driven_any": 29.479778077811, "get_ui_image": 0.02116536359604352, "step_physics": 0.36571227263451417, "survival_time": 59.99999999999873, "driven_lanedir": 29.13672838764953, "get_state_dump": 0.009920497421023251, "get_robot_state": 0.014870941589317354, "sim_render-ego0": 0.00400231343125622, "sim_render-ego1": 0.003935870282556691, "sim_render-ego2": 0.003942992268355066, "sim_render-ego3": 0.0039356336506280575, "get_duckie_state": 1.593096667185711e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.867732847211423, "agent_compute-ego0": 0.014012124318068073, "agent_compute-ego1": 0.01377275563795104, "agent_compute-ego2": 0.014097251066260294, "agent_compute-ego3": 0.014218045511809515, "complete-iteration": 0.5277286562494791, "set_robot_commands": 0.002340986964903108, "distance-from-start": 2.929118532968335, "deviation-center-line": 1.9153540425904776, "driven_lanedir_consec": 29.13672838764953, "sim_compute_sim_state": 0.026428774135694416, "sim_compute_performance-ego0": 0.0021103593729417786, "sim_compute_performance-ego1": 0.001991505031283948, "sim_compute_performance-ego2": 0.002029534680559474, "sim_compute_performance-ego3": 0.0019999050677170067}, "LFV_multi-norm-loop-000-ego3": {"driven_any": 29.49756173617188, "get_ui_image": 0.02116536359604352, "step_physics": 0.36571227263451417, "survival_time": 59.99999999999873, "driven_lanedir": 29.141775755634487, "get_state_dump": 0.009920497421023251, "get_robot_state": 0.014870941589317354, "sim_render-ego0": 
0.00400231343125622, "sim_render-ego1": 0.003935870282556691, "sim_render-ego2": 0.003942992268355066, "sim_render-ego3": 0.0039356336506280575, "get_duckie_state": 1.593096667185711e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.268655515288996, "agent_compute-ego0": 0.014012124318068073, "agent_compute-ego1": 0.01377275563795104, "agent_compute-ego2": 0.014097251066260294, "agent_compute-ego3": 0.014218045511809515, "complete-iteration": 0.5277286562494791, "set_robot_commands": 0.002340986964903108, "distance-from-start": 2.5131713561821885, "deviation-center-line": 1.9652088557310936, "driven_lanedir_consec": 29.141775755634487, "sim_compute_sim_state": 0.026428774135694416, "sim_compute_performance-ego0": 0.0021103593729417786, "sim_compute_performance-ego1": 0.001991505031283948, "sim_compute_performance-ego2": 0.002029534680559474, "sim_compute_performance-ego3": 0.0019999050677170067}, "LFV_multi-norm-zigzag-000-ego0": {"driven_any": 27.179027148097784, "get_ui_image": 0.026091785057696777, "step_physics": 0.5353145146747116, "survival_time": 59.99999999999873, "driven_lanedir": 26.345301192652084, "get_state_dump": 0.0102985778716482, "get_robot_state": 0.01539122016106319, "sim_render-ego0": 0.00419678874655032, "sim_render-ego1": 0.0041044582236716394, "sim_render-ego2": 0.004114284205694778, "sim_render-ego3": 0.004147907379366377, "get_duckie_state": 1.7830771669360818e-06, "in-drivable-lane": 0.0, "deviation-heading": 12.738645796383668, "agent_compute-ego0": 0.014864894373033764, "agent_compute-ego1": 0.014687939944811207, "agent_compute-ego2": 0.014832843054740455, "agent_compute-ego3": 0.014908085655510972, "complete-iteration": 0.7254101972000288, "set_robot_commands": 0.002423875437092523, "distance-from-start": 3.800145186429762, "deviation-center-line": 3.006798389755633, "driven_lanedir_consec": 26.345301192652084, "sim_compute_sim_state": 0.044052808906116854, "sim_compute_performance-ego0": 0.0022304737002128965, 
"sim_compute_performance-ego1": 0.0020944567941606888, "sim_compute_performance-ego2": 0.002082335760353209, "sim_compute_performance-ego3": 0.0020863807370124707}, "LFV_multi-norm-zigzag-000-ego1": {"driven_any": 27.85451861779635, "get_ui_image": 0.026091785057696777, "step_physics": 0.5353145146747116, "survival_time": 59.99999999999873, "driven_lanedir": 27.22249194972369, "get_state_dump": 0.0102985778716482, "get_robot_state": 0.01539122016106319, "sim_render-ego0": 0.00419678874655032, "sim_render-ego1": 0.0041044582236716394, "sim_render-ego2": 0.004114284205694778, "sim_render-ego3": 0.004147907379366377, "get_duckie_state": 1.7830771669360818e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.495853270611017, "agent_compute-ego0": 0.014864894373033764, "agent_compute-ego1": 0.014687939944811207, "agent_compute-ego2": 0.014832843054740455, "agent_compute-ego3": 0.014908085655510972, "complete-iteration": 0.7254101972000288, "set_robot_commands": 0.002423875437092523, "distance-from-start": 4.395757115155974, "deviation-center-line": 2.8829400292335756, "driven_lanedir_consec": 27.22249194972369, "sim_compute_sim_state": 0.044052808906116854, "sim_compute_performance-ego0": 0.0022304737002128965, "sim_compute_performance-ego1": 0.0020944567941606888, "sim_compute_performance-ego2": 0.002082335760353209, "sim_compute_performance-ego3": 0.0020863807370124707}, "LFV_multi-norm-zigzag-000-ego2": {"driven_any": 27.561465746103607, "get_ui_image": 0.026091785057696777, "step_physics": 0.5353145146747116, "survival_time": 59.99999999999873, "driven_lanedir": 26.948629453665895, "get_state_dump": 0.0102985778716482, "get_robot_state": 0.01539122016106319, "sim_render-ego0": 0.00419678874655032, "sim_render-ego1": 0.0041044582236716394, "sim_render-ego2": 0.004114284205694778, "sim_render-ego3": 0.004147907379366377, "get_duckie_state": 1.7830771669360818e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.83983021579709, "agent_compute-ego0": 
0.014864894373033764, "agent_compute-ego1": 0.014687939944811207, "agent_compute-ego2": 0.014832843054740455, "agent_compute-ego3": 0.014908085655510972, "complete-iteration": 0.7254101972000288, "set_robot_commands": 0.002423875437092523, "distance-from-start": 4.425133064120963, "deviation-center-line": 2.731873159142869, "driven_lanedir_consec": 26.948629453665895, "sim_compute_sim_state": 0.044052808906116854, "sim_compute_performance-ego0": 0.0022304737002128965, "sim_compute_performance-ego1": 0.0020944567941606888, "sim_compute_performance-ego2": 0.002082335760353209, "sim_compute_performance-ego3": 0.0020863807370124707}, "LFV_multi-norm-zigzag-000-ego3": {"driven_any": 27.66273971561494, "get_ui_image": 0.026091785057696777, "step_physics": 0.5353145146747116, "survival_time": 59.99999999999873, "driven_lanedir": 27.067758672980315, "get_state_dump": 0.0102985778716482, "get_robot_state": 0.01539122016106319, "sim_render-ego0": 0.00419678874655032, "sim_render-ego1": 0.0041044582236716394, "sim_render-ego2": 0.004114284205694778, "sim_render-ego3": 0.004147907379366377, "get_duckie_state": 1.7830771669360818e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.628382265390702, "agent_compute-ego0": 0.014864894373033764, "agent_compute-ego1": 0.014687939944811207, "agent_compute-ego2": 0.014832843054740455, "agent_compute-ego3": 0.014908085655510972, "complete-iteration": 0.7254101972000288, "set_robot_commands": 0.002423875437092523, "distance-from-start": 3.660108878968129, "deviation-center-line": 2.6833327662060116, "driven_lanedir_consec": 27.067758672980315, "sim_compute_sim_state": 0.044052808906116854, "sim_compute_performance-ego0": 0.0022304737002128965, "sim_compute_performance-ego1": 0.0020944567941606888, "sim_compute_performance-ego2": 0.002082335760353209, "sim_compute_performance-ego3": 0.0020863807370124707}}
set_robot_commands_max0.002423875437092523
set_robot_commands_mean0.0023824312009978156
set_robot_commands_median0.0023824312009978156
set_robot_commands_min0.002340986964903108
sim_compute_performance-ego0_max0.0022304737002128965
sim_compute_performance-ego0_mean0.0021704165365773378
sim_compute_performance-ego0_median0.0021704165365773378
sim_compute_performance-ego0_min0.0021103593729417786
sim_compute_performance-ego1_max0.0020944567941606888
sim_compute_performance-ego1_mean0.0020429809127223185
sim_compute_performance-ego1_median0.0020429809127223185
sim_compute_performance-ego1_min0.001991505031283948
sim_compute_performance-ego2_max0.002082335760353209
sim_compute_performance-ego2_mean0.0020559352204563416
sim_compute_performance-ego2_median0.0020559352204563416
sim_compute_performance-ego2_min0.002029534680559474
sim_compute_performance-ego3_max0.0020863807370124707
sim_compute_performance-ego3_mean0.002043142902364739
sim_compute_performance-ego3_median0.002043142902364739
sim_compute_performance-ego3_min0.0019999050677170067
sim_compute_sim_state_max0.044052808906116854
sim_compute_sim_state_mean0.03524079152090563
sim_compute_sim_state_median0.03524079152090563
sim_compute_sim_state_min0.026428774135694416
sim_render-ego0_max0.00419678874655032
sim_render-ego0_mean0.00409955108890327
sim_render-ego0_median0.00409955108890327
sim_render-ego0_min0.00400231343125622
sim_render-ego1_max0.0041044582236716394
sim_render-ego1_mean0.004020164253114165
sim_render-ego1_median0.004020164253114165
sim_render-ego1_min0.003935870282556691
sim_render-ego2_max0.004114284205694778
sim_render-ego2_mean0.004028638237024922
sim_render-ego2_median0.004028638237024922
sim_render-ego2_min0.003942992268355066
sim_render-ego3_max0.004147907379366377
sim_render-ego3_mean0.004041770514997217
sim_render-ego3_median0.004041770514997217
sim_render-ego3_min0.0039356336506280575
simulation-passed1
step_physics_max0.5353145146747116
step_physics_mean0.4505133936546129
step_physics_median0.4505133936546129
step_physics_min0.36571227263451417
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job ID 75218 | submission 13964 | user: YU CHEN | label: CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:10:41
driven_lanedir_consec_median: 13.308667841335172
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.162991054672291
in-drivable-lane_median: 24.24999999999942


other stats
agent_compute-ego0_max0.09225142726691736
agent_compute-ego0_mean0.09225142726691736
agent_compute-ego0_median0.09225142726691736
agent_compute-ego0_min0.09225142726691736
complete-iteration_max0.2661624169171005
complete-iteration_mean0.2661624169171005
complete-iteration_median0.2661624169171005
complete-iteration_min0.2661624169171005
deviation-center-line_max3.162991054672291
deviation-center-line_mean3.162991054672291
deviation-center-line_min3.162991054672291
deviation-heading_max10.846610888759454
deviation-heading_mean10.846610888759454
deviation-heading_median10.846610888759454
deviation-heading_min10.846610888759454
distance-from-start_max3.564373475553053
distance-from-start_mean3.564373475553053
distance-from-start_median3.564373475553053
distance-from-start_min3.564373475553053
driven_any_max22.616353770005535
driven_any_mean22.616353770005535
driven_any_median22.616353770005535
driven_any_min22.616353770005535
driven_lanedir_consec_max13.308667841335172
driven_lanedir_consec_mean13.308667841335172
driven_lanedir_consec_min13.308667841335172
driven_lanedir_max13.308667841335172
driven_lanedir_mean13.308667841335172
driven_lanedir_median13.308667841335172
driven_lanedir_min13.308667841335172
get_duckie_state_max1.3570602887079775e-06
get_duckie_state_mean1.3570602887079775e-06
get_duckie_state_median1.3570602887079775e-06
get_duckie_state_min1.3570602887079775e-06
get_robot_state_max0.00381069774929431
get_robot_state_mean0.00381069774929431
get_robot_state_median0.00381069774929431
get_robot_state_min0.00381069774929431
get_state_dump_max0.0049030153479405385
get_state_dump_mean0.0049030153479405385
get_state_dump_median0.0049030153479405385
get_state_dump_min0.0049030153479405385
get_ui_image_max0.023039168263355163
get_ui_image_mean0.023039168263355163
get_ui_image_median0.023039168263355163
get_ui_image_min0.023039168263355163
in-drivable-lane_max24.24999999999942
in-drivable-lane_mean24.24999999999942
in-drivable-lane_min24.24999999999942
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 22.616353770005535, "get_ui_image": 0.023039168263355163, "step_physics": 0.12201357026779086, "survival_time": 59.99999999999873, "driven_lanedir": 13.308667841335172, "get_state_dump": 0.0049030153479405385, "get_robot_state": 0.00381069774929431, "sim_render-ego0": 0.004024814110215161, "get_duckie_state": 1.3570602887079775e-06, "in-drivable-lane": 24.24999999999942, "deviation-heading": 10.846610888759454, "agent_compute-ego0": 0.09225142726691736, "complete-iteration": 0.2661624169171005, "set_robot_commands": 0.0023925719312783782, "distance-from-start": 3.564373475553053, "deviation-center-line": 3.162991054672291, "driven_lanedir_consec": 13.308667841335172, "sim_compute_sim_state": 0.011594663551705364, "sim_compute_performance-ego0": 0.0020334917143123733}}
set_robot_commands_max0.0023925719312783782
set_robot_commands_mean0.0023925719312783782
set_robot_commands_median0.0023925719312783782
set_robot_commands_min0.0023925719312783782
sim_compute_performance-ego0_max0.0020334917143123733
sim_compute_performance-ego0_mean0.0020334917143123733
sim_compute_performance-ego0_median0.0020334917143123733
sim_compute_performance-ego0_min0.0020334917143123733
sim_compute_sim_state_max0.011594663551705364
sim_compute_sim_state_mean0.011594663551705364
sim_compute_sim_state_median0.011594663551705364
sim_compute_sim_state_min0.011594663551705364
sim_render-ego0_max0.004024814110215161
sim_render-ego0_mean0.004024814110215161
sim_render-ego0_median0.004024814110215161
sim_render-ego0_min0.004024814110215161
simulation-passed1
step_physics_max0.12201357026779086
step_physics_mean0.12201357026779086
step_physics_median0.12201357026779086
step_physics_min0.12201357026779086
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job ID 75215 | submission 13964 | user: YU CHEN | label: CBC Net v2 test - APR 3 BC TFdata + mar 28 anomaly | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:02:42
driven_lanedir_consec_median: 1.6222586116244848
survival_time_median: 10.000000000000009
deviation-center-line_median: 0.4037722651758717
in-drivable-lane_median: 4.900000000000016


other stats
agent_compute-ego0 (max/mean/median/min): 0.1040913714698298
complete-iteration (max/mean/median/min): 0.2719207189569426
deviation-center-line (max/mean/min): 0.4037722651758717
deviation-heading (max/mean/median/min): 2.306310340907076
distance-from-start (max/mean/median/min): 3.1883237647524534
driven_any (max/mean/median/min): 3.3979885376124965
driven_lanedir_consec (max/mean/min): 1.6222586116244848
driven_lanedir (max/mean/median/min): 1.6222586116244848
get_duckie_state (max/mean/median/min): 1.81838647643132e-06
get_robot_state (max/mean/median/min): 0.004164757420174518
get_state_dump (max/mean/median/min): 0.00528401047436159
get_ui_image (max/mean/median/min): 0.0238306344445072
in-drivable-lane (max/mean/min): 4.900000000000016
per-episode details (JSON):
{"LF-norm-techtrack-000-ego0": {"driven_any": 3.3979885376124965, "get_ui_image": 0.0238306344445072, "step_physics": 0.113777268585281, "survival_time": 10.000000000000009, "driven_lanedir": 1.6222586116244848, "get_state_dump": 0.00528401047436159, "get_robot_state": 0.004164757420174518, "sim_render-ego0": 0.004395582189607383, "get_duckie_state": 1.81838647643132e-06, "in-drivable-lane": 4.900000000000016, "deviation-heading": 2.306310340907076, "agent_compute-ego0": 0.1040913714698298, "complete-iteration": 0.2719207189569426, "set_robot_commands": 0.0025962632686937627, "distance-from-start": 3.1883237647524534, "deviation-center-line": 0.4037722651758717, "driven_lanedir_consec": 1.6222586116244848, "sim_compute_sim_state": 0.011381285700631972, "sim_compute_performance-ego0": 0.0022943387577189734}}
set_robot_commands (max/mean/median/min): 0.0025962632686937627
sim_compute_performance-ego0 (max/mean/median/min): 0.0022943387577189734
sim_compute_sim_state (max/mean/median/min): 0.011381285700631972
sim_render-ego0 (max/mean/median/min): 0.004395582189607383
simulation-passed: 1
step_physics (max/mean/median/min): 0.113777268585281
survival_time (max/mean/min): 10.000000000000009
No reset possible
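In the jobs above, each step evaluated a single episode, so the max/mean/median/min aggregates all collapse to the same value. A minimal sketch of how such aggregates could be derived from the per-episode details JSON (the function name and data shapes are assumptions for illustration, not the evaluator's actual code):

```python
import json
import statistics

def aggregate_stats(details_json: str) -> dict:
    """Aggregate per-episode metrics into max/mean/median/min.

    With a single episode, all four aggregates collapse to the
    same value, as seen in the scoreboard rows above.
    """
    episodes = json.loads(details_json)
    # collect each metric's values across all episodes
    by_metric = {}
    for ep_stats in episodes.values():
        for metric, value in ep_stats.items():
            by_metric.setdefault(metric, []).append(value)
    return {
        metric: {
            "max": max(vals),
            "mean": statistics.mean(vals),
            "median": statistics.median(vals),
            "min": min(vals),
        }
        for metric, vals in by_metric.items()
    }

# hypothetical single-episode details, mirroring the format above
details = '{"LF-norm-techtrack-000-ego0": {"survival_time": 10.0, "driven_any": 3.39}}'
agg = aggregate_stats(details)
```
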
Job 75209 | submission 13993 | user: Frank (Chude) Qian 🇨🇦 | label: CBC Net - MixTraining - Expert LF Human LFP - Best Loss | challenge: aido-LF-sim-validation | step: sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:03:30
driven_lanedir_consec_median: 5.187664515999276
survival_time_median: 19.500000000000146
deviation-center-line_median: 0.8994472356584012
in-drivable-lane_median: 7.950000000000072


other stats
agent_compute-ego0 (max/mean/median/min): 0.054309446183616855
complete-iteration (max/mean/median/min): 0.20352926095733256
deviation-center-line (max/mean/min): 0.8994472356584012
deviation-heading (max/mean/median/min): 3.444676195233982
distance-from-start (max/mean/median/min): 3.0331439075147224
driven_any (max/mean/median/min): 8.908430038802624
driven_lanedir_consec (max/mean/min): 5.187664515999276
driven_lanedir (max/mean/median/min): 5.187664515999276
get_duckie_state (max/mean/median/min): 1.156116690477142e-06
get_robot_state (max/mean/median/min): 0.003628194789447443
get_state_dump (max/mean/median/min): 0.004567912167600353
get_ui_image (max/mean/median/min): 0.01922534371885802
in-drivable-lane (max/mean/min): 7.950000000000072
per-episode details (JSON):
{"LF-norm-loop-000-ego0": {"driven_any": 8.908430038802624, "get_ui_image": 0.01922534371885802, "step_physics": 0.10596298500704948, "survival_time": 19.500000000000146, "driven_lanedir": 5.187664515999276, "get_state_dump": 0.004567912167600353, "get_robot_state": 0.003628194789447443, "sim_render-ego0": 0.003829023722187638, "get_duckie_state": 1.156116690477142e-06, "in-drivable-lane": 7.950000000000072, "deviation-heading": 3.444676195233982, "agent_compute-ego0": 0.054309446183616855, "complete-iteration": 0.20352926095733256, "set_robot_commands": 0.00228716345394359, "distance-from-start": 3.0331439075147224, "deviation-center-line": 0.8994472356584012, "driven_lanedir_consec": 5.187664515999276, "sim_compute_sim_state": 0.007729656556073357, "sim_compute_performance-ego0": 0.001901418656644309}}
set_robot_commands (max/mean/median/min): 0.00228716345394359
sim_compute_performance-ego0 (max/mean/median/min): 0.001901418656644309
sim_compute_sim_state (max/mean/median/min): 0.007729656556073357
sim_render-ego0 (max/mean/median/min): 0.003829023722187638
simulation-passed: 1
step_physics (max/mean/median/min): 0.10596298500704948
survival_time (max/mean/min): 19.500000000000146
No reset possible
Job 75207 | submission 13993 | user: Frank (Chude) Qian 🇨🇦 | label: CBC Net - MixTraining - Expert LF Human LFP - Best Loss | challenge: aido-LF-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:02:44
driven_lanedir_consec_median: 3.624533783311488
survival_time_median: 11.850000000000032
deviation-center-line_median: 0.6241879775877582
in-drivable-lane_median: 3.350000000000014


other stats
agent_compute-ego0 (max/mean/median/min): 0.0608469888943584
complete-iteration (max/mean/median/min): 0.2560066645886718
deviation-center-line (max/mean/min): 0.6241879775877582
deviation-heading (max/mean/median/min): 3.5892031756542675
distance-from-start (max/mean/median/min): 3.4237033900150657
driven_any (max/mean/median/min): 5.428629446718204
driven_lanedir_consec (max/mean/min): 3.624533783311488
driven_lanedir (max/mean/median/min): 3.624533783311488
get_duckie_state (max/mean/median/min): 1.7821287908473937e-06
get_robot_state (max/mean/median/min): 0.004164542470659528
get_state_dump (max/mean/median/min): 0.005407180104936872
get_ui_image (max/mean/median/min): 0.02455234427412017
in-drivable-lane (max/mean/min): 3.350000000000014
per-episode details (JSON):
{"LF-norm-techtrack-000-ego0": {"driven_any": 5.428629446718204, "get_ui_image": 0.02455234427412017, "step_physics": 0.13957908874800226, "survival_time": 11.850000000000032, "driven_lanedir": 3.624533783311488, "get_state_dump": 0.005407180104936872, "get_robot_state": 0.004164542470659528, "sim_render-ego0": 0.004254424271463346, "get_duckie_state": 1.7821287908473937e-06, "in-drivable-lane": 3.350000000000014, "deviation-heading": 3.5892031756542675, "agent_compute-ego0": 0.0608469888943584, "complete-iteration": 0.2560066645886718, "set_robot_commands": 0.002566558974129813, "distance-from-start": 3.4237033900150657, "deviation-center-line": 0.6241879775877582, "driven_lanedir_consec": 3.624533783311488, "sim_compute_sim_state": 0.012274051914695931, "sim_compute_performance-ego0": 0.002254508122676561}}
set_robot_commands (max/mean/median/min): 0.002566558974129813
sim_compute_performance-ego0 (max/mean/median/min): 0.002254508122676561
sim_compute_sim_state (max/mean/median/min): 0.012274051914695931
sim_render-ego0 (max/mean/median/min): 0.004254424271463346
simulation-passed: 1
step_physics (max/mean/median/min): 0.13957908874800226
survival_time (max/mean/min): 11.850000000000032
No reset possible
Job 75204 | submission 13995 | user: Frank (Chude) Qian 🇨🇦 | label: baseline-behavior-cloning | challenge: aido-LF-sim-validation | step: sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:04:09
driven_lanedir_consec_median: 3.8563301107824337
survival_time_median: 21.050000000000164
deviation-center-line_median: 1.0714994492869234
in-drivable-lane_median: 3.000000000000001


other stats
agent_compute-ego0 (max/mean/median/min): 0.043443168509063
complete-iteration (max/mean/median/min): 0.2183743528845186
deviation-center-line (max/mean/min): 1.0714994492869234
deviation-heading (max/mean/median/min): 5.612232145231221
distance-from-start (max/mean/median/min): 2.1679763681873863
driven_any (max/mean/median/min): 4.893486651031039
driven_lanedir_consec (max/mean/min): 3.8563301107824337
driven_lanedir (max/mean/median/min): 3.8563301107824337
get_duckie_state (max/mean/median/min): 1.7915291808792764e-06
get_robot_state (max/mean/median/min): 0.00405781076982688
get_state_dump (max/mean/median/min): 0.005270344386168566
get_ui_image (max/mean/median/min): 0.02529425078658696
in-drivable-lane (max/mean/min): 3.000000000000001
per-episode details (JSON):
{"LF-norm-zigzag-000-ego0": {"driven_any": 4.893486651031039, "get_ui_image": 0.02529425078658696, "step_physics": 0.12018863002270884, "survival_time": 21.050000000000164, "driven_lanedir": 3.8563301107824337, "get_state_dump": 0.005270344386168566, "get_robot_state": 0.00405781076982688, "sim_render-ego0": 0.004270930425815673, "get_duckie_state": 1.7915291808792764e-06, "in-drivable-lane": 3.000000000000001, "deviation-heading": 5.612232145231221, "agent_compute-ego0": 0.043443168509063, "complete-iteration": 0.2183743528845186, "set_robot_commands": 0.0024896272550826953, "distance-from-start": 2.1679763681873863, "deviation-center-line": 1.0714994492869234, "driven_lanedir_consec": 3.8563301107824337, "sim_compute_sim_state": 0.011084226635395067, "sim_compute_performance-ego0": 0.0021716977747695708}}
set_robot_commands (max/mean/median/min): 0.0024896272550826953
sim_compute_performance-ego0 (max/mean/median/min): 0.0021716977747695708
sim_compute_sim_state (max/mean/median/min): 0.011084226635395067
sim_render-ego0 (max/mean/median/min): 0.004270930425815673
simulation-passed: 1
step_physics (max/mean/median/min): 0.12018863002270884
survival_time (max/mean/min): 21.050000000000164
No reset possible
Job 75192 | submission 14014 | user: YU CHEN | label: CBC Net v2 test - APR 6 anomaly + mar 28 bc | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:09:51
survival_time_median: 59.99999999999873
in-drivable-lane_median: 27.599999999999497
driven_lanedir_consec_median: 11.751801213961908
deviation-center-line_median: 2.5016486103582434


other stats
agent_compute-ego0 (max/mean/median/min): 0.08979350124965797
complete-iteration (max/mean/median/min): 0.23620247682067977
deviation-center-line (max/mean/min): 2.5016486103582434
deviation-heading (max/mean/median/min): 15.271795639733291
distance-from-start (max/mean/median/min): 1.3686525583985878
driven_any (max/mean/median/min): 23.066631066888675
driven_lanedir_consec (max/mean/min): 11.751801213961908
driven_lanedir (max/mean/median/min): 11.751801213961908
get_duckie_state (max/mean/median/min): 0.004259610156234754
get_robot_state (max/mean/median/min): 0.00373420191247894
get_state_dump (max/mean/median/min): 0.005430852642265784
get_ui_image (max/mean/median/min): 0.01842628291604124
in-drivable-lane (max/mean/min): 27.599999999999497
per-episode details (JSON):
{"LFP-norm-small_loop-000-ego0": {"driven_any": 23.066631066888675, "get_ui_image": 0.01842628291604124, "step_physics": 0.10065088224450716, "survival_time": 59.99999999999873, "driven_lanedir": 11.751801213961908, "get_state_dump": 0.005430852642265784, "get_robot_state": 0.00373420191247894, "sim_render-ego0": 0.0038590649581769425, "get_duckie_state": 0.004259610156234754, "in-drivable-lane": 27.599999999999497, "deviation-heading": 15.271795639733291, "agent_compute-ego0": 0.08979350124965797, "complete-iteration": 0.23620247682067977, "set_robot_commands": 0.0023952076377519262, "distance-from-start": 1.3686525583985878, "deviation-center-line": 2.5016486103582434, "driven_lanedir_consec": 11.751801213961908, "sim_compute_sim_state": 0.005608154276229261, "sim_compute_performance-ego0": 0.0019561706434975657}}
set_robot_commands (max/mean/median/min): 0.0023952076377519262
sim_compute_performance-ego0 (max/mean/median/min): 0.0019561706434975657
sim_compute_sim_state (max/mean/median/min): 0.005608154276229261
sim_render-ego0 (max/mean/median/min): 0.0038590649581769425
simulation-passed: 1
step_physics (max/mean/median/min): 0.10065088224450716
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 75186 | submission 14034 | user: YU CHEN | label: CBC V2, mar28_apr6 bc, mar31_apr6 anomaly | challenge: aido-LFP-sim-validation | step: sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:04:10
survival_time_median: 19.500000000000146
in-drivable-lane_median: 4.800000000000064
driven_lanedir_consec_median: 5.137684271626204
deviation-center-line_median: 1.211953838825726


other stats
agent_compute-ego0 (max/mean/median/min): 0.09554800596993292
complete-iteration (max/mean/median/min): 0.2882430669291855
deviation-center-line (max/mean/min): 1.211953838825726
deviation-heading (max/mean/median/min): 2.972944808294951
distance-from-start (max/mean/median/min): 2.4930417236136035
driven_any (max/mean/median/min): 7.1635204036333455
driven_lanedir_consec (max/mean/min): 5.137684271626204
driven_lanedir (max/mean/median/min): 5.137684271626204
get_duckie_state (max/mean/median/min): 0.025022264636690963
get_robot_state (max/mean/median/min): 0.00385216617828135
get_state_dump (max/mean/median/min): 0.008822152071901599
get_ui_image (max/mean/median/min): 0.02044184311576511
in-drivable-lane (max/mean/min): 4.800000000000064
per-episode details (JSON):
{"LFP-norm-loop-000-ego0": {"driven_any": 7.1635204036333455, "get_ui_image": 0.02044184311576511, "step_physics": 0.11776481260119193, "survival_time": 19.500000000000146, "driven_lanedir": 5.137684271626204, "get_state_dump": 0.008822152071901599, "get_robot_state": 0.00385216617828135, "sim_render-ego0": 0.003957375236179518, "get_duckie_state": 0.025022264636690963, "in-drivable-lane": 4.800000000000064, "deviation-heading": 2.972944808294951, "agent_compute-ego0": 0.09554800596993292, "complete-iteration": 0.2882430669291855, "set_robot_commands": 0.002349435216020745, "distance-from-start": 2.4930417236136035, "deviation-center-line": 1.211953838825726, "driven_lanedir_consec": 5.137684271626204, "sim_compute_sim_state": 0.008370703748424949, "sim_compute_performance-ego0": 0.0020100582591103164}}
set_robot_commands (max/mean/median/min): 0.002349435216020745
sim_compute_performance-ego0 (max/mean/median/min): 0.0020100582591103164
sim_compute_sim_state (max/mean/median/min): 0.008370703748424949
sim_render-ego0 (max/mean/median/min): 0.003957375236179518
simulation-passed: 1
step_physics (max/mean/median/min): 0.11776481260119193
survival_time (max/mean/min): 19.500000000000146
No reset possible
Job 75179 | submission 14036 | user: YU CHEN | label: CBC V2 non dropout comparsion, mar28_apr6 bc, mar31_apr6 anomaly | challenge: aido-LFP-sim-validation | step: sim-1of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:11:48
survival_time_median: 59.99999999999873
in-drivable-lane_median: 18.499999999999545
driven_lanedir_consec_median: 11.065692284845902
deviation-center-line_median: 3.018077144634942


other stats
agent_compute-ego0 (max/mean/median/min): 0.09079389806393284
complete-iteration (max/mean/median/min): 0.24494403804172385
deviation-center-line (max/mean/min): 3.018077144634942
deviation-heading (max/mean/median/min): 15.74407625225176
distance-from-start (max/mean/median/min): 1.1700801623454622
driven_any (max/mean/median/min): 17.194344649544984
driven_lanedir_consec (max/mean/min): 11.065692284845902
driven_lanedir (max/mean/median/min): 11.065692284845902
get_duckie_state (max/mean/median/min): 0.004349133255678252
get_robot_state (max/mean/median/min): 0.003794175599834306
get_state_dump (max/mean/median/min): 0.005493007631325702
get_ui_image (max/mean/median/min): 0.019125686299294654
in-drivable-lane (max/mean/min): 18.499999999999545
per-episode details (JSON):
{"LFP-norm-small_loop-000-ego0": {"driven_any": 17.194344649544984, "get_ui_image": 0.019125686299294654, "step_physics": 0.1072850513220032, "survival_time": 59.99999999999873, "driven_lanedir": 11.065692284845902, "get_state_dump": 0.005493007631325702, "get_robot_state": 0.003794175599834306, "sim_render-ego0": 0.0039366309986225674, "get_duckie_state": 0.004349133255678252, "in-drivable-lane": 18.499999999999545, "deviation-heading": 15.74407625225176, "agent_compute-ego0": 0.09079389806393284, "complete-iteration": 0.24494403804172385, "set_robot_commands": 0.0023890698978446304, "distance-from-start": 1.1700801623454622, "deviation-center-line": 3.018077144634942, "driven_lanedir_consec": 11.065692284845902, "sim_compute_sim_state": 0.005717871687394395, "sim_compute_performance-ego0": 0.0019709684767393546}}
set_robot_commands (max/mean/median/min): 0.0023890698978446304
sim_compute_performance-ego0 (max/mean/median/min): 0.0019709684767393546
sim_compute_sim_state (max/mean/median/min): 0.005717871687394395
sim_render-ego0 (max/mean/median/min): 0.0039366309986225674
simulation-passed: 1
step_physics (max/mean/median/min): 0.1072850513220032
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 75176 | submission 13504 | user: András Kalapos 🇭🇺 | label: real-v1.0-3091-310 | challenge: aido-LF-sim-testing | step: sim-3of4 | status: failed | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:00:42
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
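The failure above ("Failed to get convolution algorithm. This is probably because cuDNN failed to initialize") is a common TensorFlow symptom of the default GPU allocator reserving all device memory up front, leaving cuDNN unable to initialize its workspace. A minimal workaround sketch, assuming the submission can set an environment variable before TensorFlow is first imported (the variable name is standard TF, but whether it resolves this particular evaluator's failure is an assumption):

```python
# Workaround sketch for the cuDNN "Failed to get convolution algorithm" error.
# Asking TensorFlow to allocate GPU memory on demand, rather than reserving it
# all at startup, often lets cuDNN initialize on shared/GPU-spot machines.
# This must run BEFORE tensorflow is imported anywhere in the process.
import os

os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")

# Equivalent per-GPU setting once TensorFlow is already imported
# (assumes the TF 2.x API used by the ray/rllib stack in the traceback):
#   import tensorflow as tf
#   for gpu in tf.config.experimental.list_physical_devices("GPU"):
#       tf.config.experimental.set_memory_growth(gpu, True)
```

For a submission like the one above, the environment variable would go at the top of `solution.py`, before `model.py` (and hence TensorFlow) is imported.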
75165 | 13504 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-testing | sim-3of4 | failed | no | gpu-production-spot-0-01 | 0:00:40
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
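Both failure modes in this job ("Failed to get convolution algorithm. This is probably because cuDNN failed to initialize" and, in the next job, `ResourceExhaustedError: OOM when allocating tensor`) commonly occur when TensorFlow reserves the entire GPU at startup and cuDNN then has no free memory left to initialize its workspaces. A minimal mitigation sketch, assuming the submission controls its own process startup (e.g. the top of `solution.py`), is to opt in to on-demand GPU allocation before TensorFlow is imported; this is an illustrative workaround, not part of the submission shown above:

```python
import os

# Ask TensorFlow to allocate GPU memory on demand instead of reserving the
# whole device up front. The variable must be set before `import tensorflow`
# runs anywhere in the process, so it belongs at the very top of the entry
# script.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

The same effect can be had after import with `tf.config.experimental.set_memory_growth(gpu, True)` for each device returned by `tf.config.list_physical_devices("GPU")`, as long as it runs before the first GPU op; on a shared spot evaluator where several containers may contend for one GPU, on-demand growth reduces (but does not eliminate) the chance of the OOM seen below.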
Job ID 75164 · submission 13504 · user András Kalapos 🇭🇺 · user label real-v1.0-3091-310 · challenge aido-LF-sim-testing · step sim-3of4 · status host-error · up to date: no · evaluator gpu-production-spot-0-01 · duration 0:00:55
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[11,11,32,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
              || 	 [[{{node default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform}}]]
              || Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
              ||
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 381, in _initialize_loss
              ||     self._sess.run(tf.global_variables_initializer())
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[11,11,32,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
              || 	 [[node default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:83) ]]
              || Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
              ||
              ||
              || Original stack trace for 'default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 147, in __init__
              ||     self.model = ModelCatalog.get_model_v2(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/catalog.py", line 347, in get_model_v2
              ||     return wrapper(obs_space, action_space, num_outputs, model_config,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 83, in __init__
              ||     last_layer = tf.keras.layers.Conv2D(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in __call__
              ||     self._maybe_build(inputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
              ||     self.build(input_shapes)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 197, in build
              ||     self.kernel = self.add_weight(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 431, in add_weight
              ||     variable = self._add_variable_with_custom_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              ||     new_variable = getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              ||     return tf_variables.VariableV1(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              ||     return cls._variable_v1_call(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              ||     return previous_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              ||     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              ||     return resource_variable_ops.ResourceVariable(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              ||     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              ||     self._init_from_args(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              ||     initial_value() if init_from_fn else initial_value,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 518, in __call__
              ||     return random_ops.random_uniform(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              ||     return target(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 301, in random_uniform
              ||     result = gen_random_ops.random_uniform(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 742, in random_uniform
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 277, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[11,11,32,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
              || 	 [[{{node default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform}}]]
              || Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
              ||     variable = self._add_variable_with_custom_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              ||     new_variable = getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              ||     return tf_variables.VariableV1(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              ||     return cls._variable_v1_call(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              ||     return previous_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              ||     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              ||     return resource_variable_ops.ResourceVariable(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              ||     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              ||     self._init_from_args(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              ||     initial_value() if init_from_fn else initial_value,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 518, in __call__
              ||     return random_ops.random_uniform(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              ||     return target(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 301, in random_uniform
              ||     result = gen_random_ops.random_uniform(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 742, in random_uniform
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[11,11,32,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
              || | 	 [[{{node default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform}}]]
              || | Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
              || |
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 381, in _initialize_loss
              || |     self._sess.run(tf.global_variables_initializer())
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[11,11,32,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
              || | 	 [[node default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:83) ]]
              || | Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
              || |
              || |
              || | Original stack trace for 'default_policy/conv_value_3/kernel/Initializer/random_uniform/RandomUniform':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 147, in __init__
              || |     self.model = ModelCatalog.get_model_v2(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/catalog.py", line 347, in get_model_v2
              || |     return wrapper(obs_space, action_space, num_outputs, model_config,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 83, in __init__
              || |     last_layer = tf.keras.layers.Conv2D(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in __call__
              || |     self._maybe_build(inputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
              || |     self.build(input_shapes)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 197, in build
              || |     self.kernel = self.add_weight(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 431, in add_weight
              || |     variable = self._add_variable_with_custom_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              || |     new_variable = getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              || |     return tf_variables.VariableV1(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              || |     return cls._variable_v1_call(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              || |     return previous_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              || |     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              || |     return resource_variable_ops.ResourceVariable(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              || |     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              || |     self._init_from_args(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              || |     initial_value() if init_from_fn else initial_value,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 518, in __call__
              || |     return random_ops.random_uniform(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              || |     return target(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 301, in random_uniform
              || |     result = gen_random_ops.random_uniform(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 742, in random_uniform
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||
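Both failure modes in these logs (the `ResourceExhaustedError` OOM during variable initialization and, in the next job, the cuDNN "failed to initialize" error, which frequently also stems from GPU memory pressure) are commonly mitigated by capping how much GPU memory TensorFlow grabs. A minimal, hypothetical sketch of what the submission's `rllib_config` (the dict passed to `PPOTrainer` in `/submission/model.py`) could include — the exact keys the submission uses are not visible here, but `tf_session_args` with `gpu_options` is a standard RLlib config field:

```python
# Hypothetical sketch: constrain TensorFlow's GPU memory use inside an RLlib
# trainer config so building the policy network does not exhaust the device.
# The surrounding submission code (model.py) is assumed to pass this dict as
# config["rllib_config"] to PPOTrainer; only the keys below are illustrated.
rllib_config = {
    "num_gpus": 1,
    "tf_session_args": {
        "gpu_options": {
            # Let TF allocate memory incrementally instead of reserving
            # nearly all of it up front (helps cuDNN initialization, too).
            "allow_growth": True,
            # Optionally hard-cap the fraction of GPU memory TF may use.
            "per_process_gpu_memory_fraction": 0.4,
        },
    },
}

# The OOM hint in the log ("add report_tensor_allocations_upon_oom to
# RunOptions") refers to a tf.RunOptions flag passed to Session.run(); it
# produces an allocation report on the next OOM rather than preventing it.
```

With `allow_growth` set, the initializer op that failed here (`random_uniform` for an `[11, 11, 32, 256]` kernel) would only fail if the device is genuinely out of memory, not merely pre-reserved by another process on the shared evaluator.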

Artefacts hidden.
No reset possible
Job 75161 · submission 13504 · András Kalapos 🇭🇺 · real-v1.0-3091-310 · aido-LF-sim-testing · sim-1of4 · failed · up to date: no · gpu-production-spot-0-01 · duration 0:00:42
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
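
The cuDNN failure above ("Failed to get convolution algorithm") is raised while `PPOTrainer` initializes its TF1 session and almost always indicates GPU memory exhaustion at startup rather than a model bug. A minimal mitigation sketch, assuming the submission exposes an `rllib_config` dict as in `/submission/model.py` (the exact key layout of that dict is an assumption; `num_gpus` and `tf_session_args` are standard RLlib options):

```python
# Hedged sketch, not the submitter's actual configuration: cap trainer GPU
# usage so PPOTrainer's TF1 session can initialize cuDNN without exhausting
# device memory ("Failed to get convolution algorithm" in the trace above).
rllib_config = {
    "num_gpus": 0.5,   # fractional GPU share for the trainer process
    "num_workers": 1,  # fewer rollout workers -> fewer parallel TF sessions
    # Let TensorFlow grow GPU allocations on demand instead of reserving
    # all device memory up front (RLlib forwards this to tf.ConfigProto).
    "tf_session_args": {
        "gpu_options": {"allow_growth": True},
    },
}
```

Passing this dict as `PPOTrainer(config=rllib_config)` would mirror the constructor call seen at `/submission/model.py` line 55 in the trace.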
Job 75158 · submission 13511 · András Kalapos 🇭🇺 · real-v1.0-3091-310 · aido-LFP-sim-validation · sim-3of4 · failed · up to date: no · gpu-production-spot-0-01 · 0:01:03
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [] and type float
              || 	 [[{{node default_policy/conv3/kernel/Initializer/random_uniform/min}}]]
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 322, in _initialize_loss
              ||     self._sess.run(tf.global_variables_initializer())
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [] and type float
              || 	 [[node default_policy/conv3/kernel/Initializer/random_uniform/min (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:49) ]]
              ||
              || Original stack trace for 'default_policy/conv3/kernel/Initializer/random_uniform/min':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 147, in __init__
              ||     self.model = ModelCatalog.get_model_v2(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/catalog.py", line 347, in get_model_v2
              ||     return wrapper(obs_space, action_space, num_outputs, model_config,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 49, in __init__
              ||     last_layer = tf.keras.layers.Conv2D(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in __call__
              ||     self._maybe_build(inputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
              ||     self.build(input_shapes)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 197, in build
              ||     self.kernel = self.add_weight(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 431, in add_weight
              ||     variable = self._add_variable_with_custom_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              ||     new_variable = getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              ||     return tf_variables.VariableV1(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              ||     return cls._variable_v1_call(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              ||     return previous_getter(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              ||     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              ||     return resource_variable_ops.ResourceVariable(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              ||     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              ||     self._init_from_args(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              ||     initial_value() if init_from_fn else initial_value,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 518, in __call__
              ||     return random_ops.random_uniform(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              ||     return target(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 294, in random_uniform
              ||     minval = ops.convert_to_tensor(minval, dtype=dtype, name="min")
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1499, in convert_to_tensor
              ||     ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_conversion_registry.py", line 52, in _default_conversion_function
              ||     return constant_op.constant(value, dtype, name=name)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 263, in constant
              ||     return _constant_impl(value, dtype, shape, name, verify_shape=False,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 285, in _constant_impl
              ||     const_tensor = g._create_op_internal(  # pylint: disable=protected-access
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [] and type float
              || | 	 [[{{node default_policy/conv3/kernel/Initializer/random_uniform/min}}]]
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 322, in _initialize_loss
              || |     self._sess.run(tf.global_variables_initializer())
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor of shape [] and type float
              || | 	 [[node default_policy/conv3/kernel/Initializer/random_uniform/min (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:49) ]]
              || |
              || | Original stack trace for 'default_policy/conv3/kernel/Initializer/random_uniform/min':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 147, in __init__
              || |     self.model = ModelCatalog.get_model_v2(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/catalog.py", line 347, in get_model_v2
              || |     return wrapper(obs_space, action_space, num_outputs, model_config,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 49, in __init__
              || |     last_layer = tf.keras.layers.Conv2D(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 757, in __call__
              || |     self._maybe_build(inputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2098, in _maybe_build
              || |     self.build(input_shapes)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 197, in build
              || |     self.kernel = self.add_weight(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 431, in add_weight
              || |     variable = self._add_variable_with_custom_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/tracking/base.py", line 745, in _add_variable_with_custom_getter
              || |     new_variable = getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 133, in make_variable
              || |     return tf_variables.VariableV1(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
              || |     return cls._variable_v1_call(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 206, in _variable_v1_call
              || |     return previous_getter(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
              || |     previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variable_scope.py", line 2583, in default_variable_creator
              || |     return resource_variable_ops.ResourceVariable(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
              || |     return super(VariableMetaclass, cls).__call__(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1507, in __init__
              || |     self._init_from_args(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
              || |     initial_value() if init_from_fn else initial_value,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/init_ops.py", line 518, in __call__
              || |     return random_ops.random_uniform(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
              || |     return target(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/random_ops.py", line 294, in random_uniform
              || |     minval = ops.convert_to_tensor(minval, dtype=dtype, name="min")
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1499, in convert_to_tensor
              || |     ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/tensor_conversion_registry.py", line 52, in _default_conversion_function
              || |     return constant_op.constant(value, dtype, name=name)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 263, in constant
              || |     return _constant_impl(value, dtype, shape, name, verify_shape=False,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 285, in _constant_impl
              || |     const_tensor = g._create_op_internal(  # pylint: disable=protected-access
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
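Both failures recorded by this evaluator occur while `PPOTrainer` initializes its TensorFlow session inside the agent container: an `ResourceExhaustedError` (OOM allocating even a scalar tensor) and a "Failed to get convolution algorithm / cuDNN failed to initialize" error, both classic symptoms of the GPU having no free memory left. A common client-side mitigation, sketched here under the assumption that the submission passes options through RLlib's `tf_session_args` config key, is to let the TF session allocate GPU memory on demand rather than pre-allocating the whole device:

```python
# Sketch only: "tf_session_args" is RLlib's passthrough for TF session
# options, and "allow_growth" is the standard TF GPUOptions flag that
# makes the session claim GPU memory lazily instead of all at once.
rllib_config = {
    "tf_session_args": {
        "gpu_options": {
            "allow_growth": True,  # do not pre-allocate the entire GPU
        },
    },
}
```

The submission's `model.py` would then build the trainer with this merged config (hypothetical usage, not verified against this submission's code). Note that on a shared spot evaluator another process may already hold the GPU, in which case no client-side setting can recover the memory.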
Job ID: 75154 · submission: 13511 · user: András Kalapos 🇭🇺 · user label: real-v1.0-3091-310 · challenge: aido-LFP-sim-validation · step: sim-3of4 · status: failed · up to date: no · evaluator: gpu-production-spot-0-01 · duration: 0:00:40
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
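The repeated `UnknownError: Failed to get convolution algorithm ... cuDNN failed to initialize` in the log above is commonly a GPU memory-allocation problem: by default TensorFlow reserves nearly all GPU memory at start-up, and cuDNN can then fail to initialize its workspace. A minimal workaround sketch follows; the environment variable and API are standard TensorFlow 2.x mechanisms, but whether they resolve this particular submission's failure is an assumption, not something the evaluator log confirms:

```python
import os

# Ask TensorFlow to allocate GPU memory on demand instead of reserving
# (almost) all of it up front. This must be set before TensorFlow is
# imported/initialized in the agent process.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-process setting (TF 2.x), to be called before any op runs:
#   import tensorflow as tf
#   for gpu in tf.config.experimental.list_physical_devices("GPU"):
#       tf.config.experimental.set_memory_growth(gpu, True)

print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
```

For a containerized submission like this one, the variable could also be set in the image (e.g. `ENV TF_FORCE_GPU_ALLOW_GROWTH=true` in the Dockerfile) so it takes effect before `solution.py` starts.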
Job 75153 | submission 13511 | AndrΓ‘s Kalapos πŸ‡­πŸ‡Ί | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-3of4 | failed | up to date: no | gpu-production-spot-0-01 | 0:00:42
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
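The recurring root cause in the log above is `tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm`, which usually means cuDNN could not initialize on the evaluator's GPU (commonly because GPU memory was already exhausted; enabling TensorFlow's memory growth via `tf.config.experimental.set_memory_growth` is a frequently suggested mitigation). Reading that root cause is made harder by the `|` nesting bars that zuper-nodes prepends each time a remote node's stderr is re-quoted. As a minimal sketch (the helper name `innermost_error` is hypothetical, not part of any Duckietown API), one can strip the bars and pull out the last TensorFlow error line:

```python
import re

def innermost_error(log_text: str) -> str:
    """Strip the leading indentation and '|' nesting bars that zuper-nodes
    prepends when re-quoting a remote node's stderr, then return the last
    TensorFlow error line, which is the root cause of the failure."""
    cleaned = []
    for line in log_text.splitlines():
        # Remove leading whitespace and any run of '|' bars (one bar per
        # level of re-quoting), e.g. '              || | tensorflow...'.
        cleaned.append(re.sub(r"^\s*(\|\s*)*", "", line))
    errors = [l for l in cleaned
              if l.startswith("tensorflow.python.framework.errors_impl")]
    return errors[-1] if errors else ""

# Example on a doubly nested line copied from the log above:
sample = ("              || | tensorflow.python.framework.errors_impl."
          "UnknownError: 2 root error(s) found.")
print(innermost_error(sample))
```

This is only a reading aid for logs shaped like the ones on this page; deeper nesting levels (`|| |`, `|| | |`, …) are handled uniformly by the regex.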
Job 75151 | submission 13511 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFP-sim-validation | sim-2of4 | failed | up to date: no | gpu-production-spot-0-01 | 0:00:41
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
Artefacts hidden.
No reset possible
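The failure above comes from TensorFlow inside the submission container: `Failed to get convolution algorithm. This is probably because cuDNN failed to initialize`, raised while `PPOTrainer` builds its first convolution. This error commonly indicates that TensorFlow could not allocate GPU memory when cuDNN initialized (for example, another process already holds the GPU, or TF's default strategy of reserving all memory up front fails). A minimal mitigation sketch is below; it is a common workaround for this class of error, not a confirmed fix for this particular evaluator run.

```python
# Hedged mitigation sketch (assumption: the error is caused by TensorFlow
# reserving all GPU memory at startup). Setting TF_FORCE_GPU_ALLOW_GROWTH
# before TensorFlow is imported makes it allocate GPU memory on demand.
import os

# Must run before any `import tensorflow` in the process,
# e.g. at the very top of solution.py.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-code alternative (TensorFlow 2.x), after importing TF but
# before building any model:
#   import tensorflow as tf
#   for gpu in tf.config.list_physical_devices("GPU"):
#       tf.config.experimental.set_memory_growth(gpu, True)
```

Either variant leaves unused GPU memory available to other processes on the evaluator, at the cost of possible fragmentation from incremental allocation.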
Job ID 75149 · submission 13511 · user András Kalapos 🇭🇺 · user label real-v1.0-3091-310 · challenge aido-LFP-sim-validation · step sim-2of4 · status failed · up to date: no · evaluator gpu-production-spot-0-01 · duration 0:00:42
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
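Editorial note: the repeated `UnknownError: Failed to get convolution algorithm ... cuDNN failed to initialize` in the traceback above is, in TensorFlow, most often an out-of-GPU-memory symptom: by default TF pre-allocates nearly all VRAM, and on a shared evaluator the cuDNN handle then fails to initialize. A minimal sketch of the usual workaround, assuming the submission can set an environment variable before TensorFlow first touches CUDA (e.g. at the top of `solution.py`, before the RLlib/TF imports):

```python
import os

# Ask TensorFlow's GPU allocator to grow memory on demand instead of
# reserving all VRAM up front. This must be set before TensorFlow
# initializes CUDA (i.e. before `import tensorflow` runs).
os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")

print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
```

The same effect can be had after import via `tf.config.experimental.set_memory_growth(gpu, True)` for each device in `tf.config.list_physical_devices('GPU')`, but the environment-variable form is safer here because RLlib may create the session during `PPOTrainer(...)` construction, before user code gets a chance to run.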
75145	13511	András Kalapos 🇭🇺	real-v1.0-3091-310	aido-LFP-sim-validation	sim-2of4	failed	no	gpu-production-spot-0-01	0:00:41
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
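The root cause in the log above is `Failed to get convolution algorithm ... cuDNN failed to initialize`, which in TensorFlow setups like this one is most often a GPU memory problem: TensorFlow tries to reserve the whole device up front and cuDNN cannot initialize if another process (or an earlier allocation) already holds the memory. A common mitigation, shown here as a sketch only (not part of the submission's actual code), is to request incremental GPU memory allocation before TensorFlow builds any graph:

```python
import os

# Setting this environment variable BEFORE TensorFlow is imported asks it to
# grow GPU memory usage on demand instead of reserving the full device.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-process configuration once TensorFlow 2.x is importable
# (hypothetical placement, e.g. at the top of solution.py):
# import tensorflow as tf
# for gpu in tf.config.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

Whether this fixes a given job depends on why cuDNN failed on that evaluator (driver mismatch and concurrent jobs are other possibilities), so treat it as a first thing to try rather than a guaranteed fix.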
No reset possible
75142 | 13513 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFV-sim-validation | sim-1of4 | failed | no | gpu-production-spot-0-01 | 0:01:00
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
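The "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize" error shown in the traceback above is, in practice, most often a GPU memory issue: by default TensorFlow tries to reserve nearly all GPU memory up front, and if another process already holds it, cuDNN cannot initialize. A minimal sketch of the usual mitigation, assuming a TF 2.x agent container like the one in this log (the env var must be set before TensorFlow is imported):

```python
import os

# Setting TF_FORCE_GPU_ALLOW_GROWTH before importing TensorFlow makes it
# allocate GPU memory on demand instead of reserving it all at once;
# pre-allocation failing on a busy GPU is a common cause of the
# "Failed to get convolution algorithm" / cuDNN-initialization error.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-code form (hypothetical placement in the agent's init,
# e.g. before building the PPOTrainer in solution.py):
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

Whether this applies to a given failed job depends on what else was running on the evaluator's GPU at the time; it does not rule out a CUDA/cuDNN version mismatch in the submission image.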
Job 75140 | submission 13513 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LFV-sim-validation | sim-1of4 | failed | not up to date | gpu-production-spot-0-01 | 0:01:01
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
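The failure in the trace above ("Failed to get convolution algorithm. This is probably because cuDNN failed to initialize") is in practice almost always a GPU memory problem: by default TensorFlow pre-allocates the entire GPU at process start, so cuDNN cannot obtain workspace memory when the policy's convolution layers are first run. A minimal sketch of the usual mitigation, assuming a submission container with TensorFlow installed (the environment-variable route works for both TF 1.x and 2.x and must happen before `import tensorflow`):

```python
import os

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing it all
# up front. This must be set BEFORE tensorflow is imported anywhere in the
# process (e.g. at the top of solution.py in this kind of submission).
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent TF 2.x programmatic form, after importing tensorflow:
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

This is a sketch of the standard workaround, not a fix confirmed for these specific submissions; if another process on the evaluator already holds the GPU, the error can recur regardless.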
75135 | 13518 | András Kalapos πŸ‡­πŸ‡Ί | real-v1.0-3091-310 | aido-LFV_multi-sim-validation | 402 | failed | yes | gpu-production-spot-0-01 | 0:02:50
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 219, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
75132 | 13534 | András Kalapos πŸ‡­πŸ‡Ί | 3090 | aido-LF-sim-testing | sim-0of4 | failed | no | gpu-production-spot-0-01 | 0:00:41
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
Artefacts hidden.
No reset possible
Job 75130 | submission 13534 | András Kalapos 🇭🇺 | 3090 | aido-LF-sim-testing | sim-0of4 | failed | up to date: no | gpu-production-spot-0-01 | 0:00:40
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
No reset possible
75126 | 13541 | András Kalapos 🇭🇺 | 3090 | aido-LFP-sim-validation | sim-1of4 | failed | no | gpu-production-spot-0-01 | 0:00:44
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
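Both failed jobs above abort with the same TensorFlow error, "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize", raised while the agent's `init()` builds its `PPOTrainer`. On a shared GPU evaluator this error is commonly a symptom of TensorFlow pre-allocating nearly all GPU memory at session creation, leaving cuDNN too little to initialize. A minimal mitigation sketch (not part of this log, and not guaranteed to be the cause here) uses TensorFlow's documented `TF_FORCE_GPU_ALLOW_GROWTH` switch, which must be set before TensorFlow is imported — e.g. at the top of the submission's `solution.py`:

```python
import os

# Ask TensorFlow's GPU allocator to grow on demand instead of reserving
# (almost) all GPU memory up front. This frequently avoids the
# "Failed to get convolution algorithm" / cuDNN-initialization errors
# seen in the tracebacks above on machines where GPU memory is shared.
# Must run before `import tensorflow`.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

The same effect can be achieved per-session with `tf.config.experimental.set_memory_growth`, but the environment variable also covers libraries (such as Ray RLlib here) that create their own sessions internally.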
75123 | 13534 | András Kalapos 🇭🇺 | 3090 | aido-LF-sim-testing | sim-2of4 | failed | no | gpu-production-spot-0-01 | 0:00:40
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
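The failed jobs above abort with the same root error: TensorFlow cannot initialize cuDNN ("Failed to get convolution algorithm") while the submission's `PPOTrainer` builds its policy network in `init()`. This commonly indicates a GPU memory problem rather than a broken model: by default TensorFlow pre-allocates nearly all GPU memory, and if another process on the evaluator already holds it, cuDNN initialization fails. A minimal sketch of the usual workaround (an assumption about this submission's setup, not a confirmed fix) is to enable on-demand GPU memory growth before any TensorFlow model is constructed:

```python
import os

# Hypothetical workaround: ask TensorFlow to allocate GPU memory on demand
# instead of pre-allocating it all, which frequently resolves
# "Failed to get convolution algorithm ... cuDNN failed to initialize".
# The variable must be set BEFORE `import tensorflow` runs anywhere.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent TF 2.x API call (assumption: TF 2.x), placed before model creation:
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

In this submission the natural place for either variant would be at the top of `solution.py`, before the `RLlibModel` / `PPOTrainer` construction seen in the traceback.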
75116 13534 András Kalapos 🇭🇺 3090 aido-LF-sim-testing sim-2of4 failed no gpu-production-spot-0-01 0:00:41
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
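The failure above is the TensorFlow "Failed to get convolution algorithm … cuDNN failed to initialize" error, which on shared GPU evaluators is most often caused by TensorFlow attempting to allocate all GPU memory up front (or by a CUDA/cuDNN version mismatch between the submission image and the host driver). A minimal sketch of the common workaround, not verified against this particular submission, is to enable on-demand GPU memory growth before TensorFlow builds any session — e.g. at the top of `solution.py`, before the `RLlibModel`/`PPOTrainer` construction seen in the traceback:

```python
import os

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing it all
# at startup. The env var must be set before TensorFlow is first imported
# anywhere in the process.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

try:
    import tensorflow as tf

    # Equivalent programmatic switch (TF 2.x API); a no-op if no GPU is visible.
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
except ImportError:
    # TensorFlow is not installed in this environment; the env var alone is
    # still honored by any TensorFlow build imported later in the process.
    pass
```

If the error persists with memory growth enabled, the remaining suspect is the image's CUDA/cuDNN build not matching the evaluator's driver, which only a rebuild of the submission container can fix.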
75114 13534 András Kalapos 🇭🇺 3090 aido-LF-sim-testing sim-2of4 failed no gpu-production-spot-0-01 0:00:39
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
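The failure above is the well-known TensorFlow error "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize", which is most often caused by TensorFlow pre-allocating the whole GPU and leaving cuDNN without workspace memory. A minimal sketch of a common mitigation (this is an editor's assumption about the cause, not part of the submission's code) is to enable on-demand GPU memory growth before TensorFlow is imported:

```python
import os

# Must be set BEFORE "import tensorflow": asks TensorFlow to grow GPU
# memory on demand instead of grabbing the whole device at startup,
# a common cause of "cuDNN failed to initialize" on shared GPUs.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Equivalent in-code form (TF 2.x), when the environment cannot be
# controlled before import:
# import tensorflow as tf
# for gpu in tf.config.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

For an RLlib-based agent like this one, the environment variable has to be exported in the submission container before `solution.py` starts, since the `PPOTrainer` constructor triggers the first TensorFlow session.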
Job 75111, submission 13541, user András Kalapos 🇭🇺, label 3090, challenge aido-LFP-sim-validation, step sim-3of4, status failed, up to date: no, evaluator gpu-production-spot-0-01, duration 0:00:39
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
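The failure repeated throughout these logs, "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize", commonly occurs when TensorFlow pre-allocates nearly all GPU memory at session creation and cuDNN then cannot obtain a workspace. A frequently used mitigation (not part of this submission's code; `configure_gpu_memory_growth` is a hypothetical helper name) is to enable on-demand GPU memory growth before TensorFlow initializes:

```python
import os

# Ask TensorFlow to grow its GPU allocation on demand instead of
# grabbing (almost) all memory up front. The environment variable must
# be set BEFORE tensorflow is imported anywhere in the process.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"


def configure_gpu_memory_growth():
    """Equivalent per-device setting via the TF 2.x API.

    Call this before any op runs (e.g. at the top of solution.py,
    before constructing the PPOTrainer).
    """
    import tensorflow as tf  # deferred so the env var above takes effect

    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
```

This is a sketch of a general workaround, not a confirmed fix for these specific jobs; the same symptom can also come from a CUDA/cuDNN version mismatch between the submission image and the evaluator's driver.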
75107 13541 András Kalapos 🇭🇺 3090 aido-LFP-sim-validation sim-3of4 failed no gpu-production-spot-0-01 0:00:43
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
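The cuDNN initialization failure recorded above ("Failed to get convolution algorithm. This is probably because cuDNN failed to initialize") is commonly a symptom of TensorFlow pre-allocating all GPU memory while another process (for example the simulator) already holds some. A minimal sketch of the usual mitigation, assuming the submission can set this before TensorFlow is imported; the exact cause on this evaluator is not confirmed:

```python
import os

# Option 1: ask TensorFlow to allocate GPU memory incrementally instead of
# grabbing it all up front. This environment variable must be set before
# TensorFlow is imported anywhere in the process.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# Option 2: the equivalent TF 2.x API call, placed before any ops are created:
# import tensorflow as tf
# for gpu in tf.config.list_physical_devices("GPU"):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

If the error persists, checking that the container's CUDA/cuDNN versions match the TensorFlow build is the other common remedy.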
Job 75104 | submission 13541 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFP-sim-validation | step: sim-3of4 | status: failed | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:00:44
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
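Both failed jobs abort on the same TensorFlow error: `Failed to get convolution algorithm. This is probably because cuDNN failed to initialize`. On shared GPU evaluators this error frequently indicates that TensorFlow pre-allocated the entire GPU memory, leaving cuDNN no workspace to initialize. As a hedged mitigation (an assumption about this particular failure, not a confirmed diagnosis of these jobs), TensorFlow's allocator can be switched to on-demand growth via an environment variable before the agent process starts:

```shell
# Assumption: the submission image runs TF 1.15+/2.x, which honors this
# variable. It makes TensorFlow allocate GPU memory incrementally instead
# of reserving it all at session creation, so cuDNN can grab workspace.
export TF_FORCE_GPU_ALLOW_GROWTH=true
```

The same effect can be obtained in code with `tf.config.experimental.set_memory_growth(gpu, True)` before any GPU op runs (e.g. at the top of `solution.py`), but the environment variable avoids touching the submission code.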
Job ID 75101 · submission 13543 · user: András Kalapos 🇭🇺 · user label: 3090 · challenge: aido-LFV-sim-validation · step: sim-2of4 · status: failed · up to date: no · evaluator: gpu-production-spot-0-01 · duration: 0:01:11
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
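
The `UnknownError: Failed to get convolution algorithm` in the log above is, in most reports, cuDNN failing to initialize because TensorFlow tried to allocate the entire GPU up front (or the GPU was already occupied by another process). A minimal sketch of the common workaround, assuming a TF 2.x runtime like the one in the trace, is to enable on-demand memory growth before any model is built (e.g. at the top of `solution.py`'s `init()`); the helper name below is illustrative, not part of the submission template:

```python
def enable_gpu_memory_growth():
    """Ask TensorFlow to allocate GPU memory on demand instead of grabbing
    the whole device at once (a common fix for cuDNN init failures).
    Returns False if TensorFlow is not importable, True otherwise."""
    try:
        import tensorflow as tf
    except ImportError:
        return False  # nothing to configure without TensorFlow
    # Must run before TensorFlow initializes the GPUs (i.e. before any
    # session/model is created), otherwise set_memory_growth raises.
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
    return True
```

Setting the environment variable `TF_FORCE_GPU_ALLOW_GROWTH=true` in the container achieves the same effect without code changes, which can be easier when the model is constructed deep inside RLlib's `PPOTrainer` as in this traceback.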
Job 75097 · submission 13543 · András Kalapos 🇭🇺 · "3090" · aido-LFV-sim-validation · sim-2of4 · failed · up to date: no · gpu-production-spot-0-01 · duration 0:01:14
InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 278, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
Artefacts hidden. If you are the author, please login using the top-right link or use the dashboard.
No reset possible
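The root cause above, "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize", typically means TensorFlow could not initialize cuDNN on the GPU, often because the default allocator tries to reserve the whole device and fails (e.g. another process already holds GPU memory). A minimal sketch of a common workaround, assuming the submission can set the environment before TensorFlow is imported (the variable is a standard TensorFlow 2.x option; where exactly to set it in this submission is an assumption):

```python
import os

# Ask TensorFlow to grow GPU memory allocations on demand instead of
# reserving the entire device up front; this frequently avoids
# "Failed to get convolution algorithm: cuDNN failed to initialize".
# It must be set before `import tensorflow` runs.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```

The same effect can be achieved after import via `tf.config.experimental.set_memory_growth`, but the environment variable is the safest option when third-party code (here, RLlib's `PPOTrainer`) creates the TF session.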
Job 75087 · submission 14013 · user: YU CHEN · label: CBC Net v2 test - APR 6 anomaly + mar 28 bc · challenge: aido-LF-sim-validation · step: sim-1of4 · status: success · up to date: no · evaluator: gpu-production-spot-0-01 · duration: 0:11:03
driven_lanedir_consec_median: 15.309798022693087
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.6183747197873246
in-drivable-lane_median: 21.34999999999968


other stats (single episode, so max, mean, median and min coincide)
agent_compute-ego0 (max/mean/median/min): 0.09603780632114332
complete-iteration (max/mean/median/min): 0.281465676305296
deviation-center-line (max/mean/min): 2.6183747197873246
deviation-heading (max/mean/median/min): 13.160323319645457
distance-from-start (max/mean/median/min): 3.5785895753665695
driven_any (max/mean/median/min): 25.209563277255803
driven_lanedir_consec (max/mean/min): 15.309798022693087
driven_lanedir (max/mean/median/min): 15.309798022693087
get_duckie_state (max/mean/median/min): 1.4698177849025551e-06
get_robot_state (max/mean/median/min): 0.0038970290969353137
get_state_dump (max/mean/median/min): 0.004881258511126389
get_ui_image (max/mean/median/min): 0.02354860424896164
in-drivable-lane (max/mean/min): 21.34999999999968
per-episodes details: {"LF-norm-techtrack-000-ego0": {"driven_any": 25.209563277255803, "get_ui_image": 0.02354860424896164, "step_physics": 0.13276312770097082, "survival_time": 59.99999999999873, "driven_lanedir": 15.309798022693087, "get_state_dump": 0.004881258511126389, "get_robot_state": 0.0038970290969353137, "sim_render-ego0": 0.004120136677077371, "get_duckie_state": 1.4698177849025551e-06, "in-drivable-lane": 21.34999999999968, "deviation-heading": 13.160323319645457, "agent_compute-ego0": 0.09603780632114332, "complete-iteration": 0.281465676305296, "set_robot_commands": 0.0024605003820668647, "distance-from-start": 3.5785895753665695, "deviation-center-line": 2.6183747197873246, "driven_lanedir_consec": 15.309798022693087, "sim_compute_sim_state": 0.011566663761123034, "sim_compute_performance-ego0": 0.002085818537665247}}
set_robot_commands (max/mean/median/min): 0.0024605003820668647
sim_compute_performance-ego0 (max/mean/median/min): 0.002085818537665247
sim_compute_sim_state (max/mean/median/min): 0.011566663761123034
sim_render-ego0 (max/mean/median/min): 0.004120136677077371
simulation-passed: 1
step_physics (max/mean/median/min): 0.13276312770097082
survival_time (max/mean/min): 59.99999999999873
No reset possible
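Each metric above is reported as max/mean/median/min aggregated over the evaluated episodes; because this step ran a single episode, the four aggregates collapse to the same value. A sketch of how such a summary can be derived from the per-episode details, using a trimmed copy of the JSON shown in the log (only two of the recorded fields are kept for brevity):

```python
import json
from statistics import mean, median

# Trimmed per-episode details from the log above (a single episode).
details = json.loads(
    '{"LF-norm-techtrack-000-ego0":'
    ' {"survival_time": 59.99999999999873,'
    '  "driven_lanedir_consec": 15.309798022693087}}'
)

# Collect one metric across all episodes, then aggregate.
values = [ep["driven_lanedir_consec"] for ep in details.values()]
summary = {
    "max": max(values),
    "mean": mean(values),
    "median": median(values),
    "min": min(values),
}
# With a single episode, all four aggregates are equal.
```

With more episodes per step (e.g. the 4 simulations of aido-LF-sim-validation combined), the aggregates would differ and the median becomes the headline score.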
Job 75078 · submission 14031 · user: YU CHEN · label: CBC V2, mar28 bc, mar31_apr6 anomaly · challenge: aido-LF-sim-validation · step: sim-2of4 · status: success · up to date: no · evaluator: gpu-production-spot-0-01 · duration: 0:09:24
driven_lanedir_consec_median: 14.882966376484926
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.8058411884212004
in-drivable-lane_median: 22.449999999999665


other stats (single episode, so max, mean, median and min coincide)
agent_compute-ego0 (max/mean/median/min): 0.09163369485281786
complete-iteration (max/mean/median/min): 0.23235842329973383
deviation-center-line (max/mean/min): 2.8058411884212004
deviation-heading (max/mean/median/min): 16.4700267620946
distance-from-start (max/mean/median/min): 1.257493272073518
driven_any (max/mean/median/min): 25.26344841677143
driven_lanedir_consec (max/mean/min): 14.882966376484926
driven_lanedir (max/mean/median/min): 14.882966376484926
get_duckie_state (max/mean/median/min): 1.3064385254516094e-06
get_robot_state (max/mean/median/min): 0.0037812620872859654
get_state_dump (max/mean/median/min): 0.004753605908497883
get_ui_image (max/mean/median/min): 0.018481509870136907
in-drivable-lane (max/mean/min): 22.449999999999665
per-episodes details: {"LF-norm-small_loop-000-ego0": {"driven_any": 25.26344841677143, "get_ui_image": 0.018481509870136907, "step_physics": 0.09981764921240764, "survival_time": 59.99999999999873, "driven_lanedir": 14.882966376484926, "get_state_dump": 0.004753605908497883, "get_robot_state": 0.0037812620872859654, "sim_render-ego0": 0.0038838995981970792, "get_duckie_state": 1.3064385254516094e-06, "in-drivable-lane": 22.449999999999665, "deviation-heading": 16.4700267620946, "agent_compute-ego0": 0.09163369485281786, "complete-iteration": 0.23235842329973383, "set_robot_commands": 0.002374214693271151, "distance-from-start": 1.257493272073518, "deviation-center-line": 2.8058411884212004, "driven_lanedir_consec": 14.882966376484926, "sim_compute_sim_state": 0.0055947674998236534, "sim_compute_performance-ego0": 0.0019485543510697463}}
set_robot_commands (max/mean/median/min): 0.002374214693271151
sim_compute_performance-ego0 (max/mean/median/min): 0.0019485543510697463
sim_compute_sim_state (max/mean/median/min): 0.0055947674998236534
sim_render-ego0 (max/mean/median/min): 0.0038838995981970792
simulation-passed: 1
step_physics (max/mean/median/min): 0.09981764921240764
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 75033 · submission 14035 · user: YU CHEN · label: CBC V2 non dropout comparsion, mar28_apr6 bc, mar31_apr6 anomaly · challenge: aido-LF-sim-validation · step: sim-0of4 · status: success · up to date: no · evaluator: gpu-production-spot-0-01 · duration: 0:11:52
driven_lanedir_consec_median: 17.30893226414933
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.9783547283064062
in-drivable-lane_median: 12.049999999999535


other stats (single episode, so max, mean, median and min coincide)
agent_compute-ego0 (max/mean/median/min): 0.09415403095312062
complete-iteration (max/mean/median/min): 0.2548579969175849
deviation-center-line (max/mean/min): 3.9783547283064062
deviation-heading (max/mean/median/min): 13.16576958349228
distance-from-start (max/mean/median/min): 2.8923263218840445
driven_any (max/mean/median/min): 22.384968454591352
driven_lanedir_consec (max/mean/min): 17.30893226414933
driven_lanedir (max/mean/median/min): 17.30893226414933
get_duckie_state (max/mean/median/min): 1.3947784652519384e-06
get_robot_state (max/mean/median/min): 0.0038657682722156783
get_state_dump (max/mean/median/min): 0.004829575874525542
get_ui_image (max/mean/median/min): 0.020204207978578137
in-drivable-lane (max/mean/min): 12.049999999999535
per-episodes details: {"LF-norm-loop-000-ego0": {"driven_any": 22.384968454591352, "get_ui_image": 0.020204207978578137, "step_physics": 0.11473732228878634, "survival_time": 59.99999999999873, "driven_lanedir": 17.30893226414933, "get_state_dump": 0.004829575874525542, "get_robot_state": 0.0038657682722156783, "sim_render-ego0": 0.004064132728544898, "get_duckie_state": 1.3947784652519384e-06, "in-drivable-lane": 12.049999999999535, "deviation-heading": 13.16576958349228, "agent_compute-ego0": 0.09415403095312062, "complete-iteration": 0.2548579969175849, "set_robot_commands": 0.002412524052603258, "distance-from-start": 2.8923263218840445, "deviation-center-line": 3.9783547283064062, "driven_lanedir_consec": 17.30893226414933, "sim_compute_sim_state": 0.008400416592574933, "sim_compute_performance-ego0": 0.002088028425777286}}
set_robot_commands (max/mean/median/min): 0.002412524052603258
sim_compute_performance-ego0 (max/mean/median/min): 0.002088028425777286
sim_compute_sim_state (max/mean/median/min): 0.008400416592574933
sim_render-ego0 (max/mean/median/min): 0.004064132728544898
simulation-passed: 1
step_physics (max/mean/median/min): 0.11473732228878634
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 74976 · submission 13535 · user: András Kalapos 🇭🇺 · label: 3090 · challenge: aido-LF-sim-validation · step: sim-1of4 · status: success · up to date: no · evaluator: gpu-production-spot-0-01 · duration: 0:09:27
driven_lanedir_consec_median: 27.54841049203641
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.4212303095611456
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max0.013708182318224498
agent_compute-ego0_mean0.013708182318224498
agent_compute-ego0_median0.013708182318224498
agent_compute-ego0_min0.013708182318224498
complete-iteration_max0.1969039035180923
complete-iteration_mean0.1969039035180923
complete-iteration_median0.1969039035180923
complete-iteration_min0.1969039035180923
deviation-center-line_max2.4212303095611456
deviation-center-line_mean2.4212303095611456
deviation-center-line_min2.4212303095611456
deviation-heading_max8.806098322674401
deviation-heading_mean8.806098322674401
deviation-heading_median8.806098322674401
deviation-heading_min8.806098322674401
distance-from-start_max3.4650768732675967
distance-from-start_mean3.4650768732675967
distance-from-start_median3.4650768732675967
distance-from-start_min3.4650768732675967
driven_any_max27.98561973109893
driven_any_mean27.98561973109893
driven_any_median27.98561973109893
driven_any_min27.98561973109893
driven_lanedir_consec_max27.54841049203641
driven_lanedir_consec_mean27.54841049203641
driven_lanedir_consec_min27.54841049203641
driven_lanedir_max27.54841049203641
driven_lanedir_mean27.54841049203641
driven_lanedir_median27.54841049203641
driven_lanedir_min27.54841049203641
get_duckie_state_max1.6641656524632794e-06
get_duckie_state_mean1.6641656524632794e-06
get_duckie_state_median1.6641656524632794e-06
get_duckie_state_min1.6641656524632794e-06
get_robot_state_max0.003676779561991696
get_robot_state_mean0.003676779561991696
get_robot_state_median0.003676779561991696
get_robot_state_min0.003676779561991696
get_state_dump_max0.004662686045421947
get_state_dump_mean0.004662686045421947
get_state_dump_median0.004662686045421947
get_state_dump_min0.004662686045421947
get_ui_image_max0.022723698000626003
get_ui_image_mean0.022723698000626003
get_ui_image_median0.022723698000626003
get_ui_image_min0.022723698000626003
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 27.98561973109893, "get_ui_image": 0.022723698000626003, "step_physics": 0.13261041672998025, "survival_time": 59.99999999999873, "driven_lanedir": 27.54841049203641, "get_state_dump": 0.004662686045421947, "get_robot_state": 0.003676779561991696, "sim_render-ego0": 0.003900496786976734, "get_duckie_state": 1.6641656524632794e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.806098322674401, "agent_compute-ego0": 0.013708182318224498, "complete-iteration": 0.1969039035180923, "set_robot_commands": 0.002241194397087002, "distance-from-start": 3.4650768732675967, "deviation-center-line": 2.4212303095611456, "driven_lanedir_consec": 27.54841049203641, "sim_compute_sim_state": 0.01134851909894729, "sim_compute_performance-ego0": 0.0019472864247877136}}
set_robot_commands (max/mean/median/min): 0.002241194397087002
sim_compute_performance-ego0 (max/mean/median/min): 0.0019472864247877136
sim_compute_sim_state (max/mean/median/min): 0.01134851909894729
sim_render-ego0 (max/mean/median/min): 0.003900496786976734
simulation-passed: 1
step_physics (max/mean/median/min): 0.13261041672998025
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 74939, submission 13565, user Márton Tim 🇭🇺, label 3626, challenge aido-LF-sim-testing, step sim-2of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:10:02
driven_lanedir_consec_median: 27.834107634155927
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.4803816061036157
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0 (max/mean/median/min): 0.04827822336646341
complete-iteration (max/mean/median/min): 0.2016407301979795
deviation-center-line (max/mean/min): 2.4803816061036157
deviation-heading (max/mean/median/min): 9.740944896508552
distance-from-start (max/mean/median/min): 1.0921686597785725
driven_any (max/mean/median/min): 28.233293956343964
driven_lanedir_consec (max/mean/min): 27.834107634155927
driven_lanedir (max/mean/median/min): 27.834107634155927
get_duckie_state (max/mean/median/min): 1.393587364940024e-06
get_robot_state (max/mean/median/min): 0.003954407575227736
get_state_dump (max/mean/median/min): 0.004954443883141511
get_ui_image (max/mean/median/min): 0.01941576587667473
in-drivable-lane (max/mean/min): 0.0
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 28.233293956343964, "get_ui_image": 0.01941576587667473, "step_physics": 0.11049599830157354, "survival_time": 59.99999999999873, "driven_lanedir": 27.834107634155927, "get_state_dump": 0.004954443883141511, "get_robot_state": 0.003954407575227736, "sim_render-ego0": 0.004060633275828492, "get_duckie_state": 1.393587364940024e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.740944896508552, "agent_compute-ego0": 0.04827822336646341, "complete-iteration": 0.2016407301979795, "set_robot_commands": 0.002414669025748298, "distance-from-start": 1.0921686597785725, "deviation-center-line": 2.4803816061036157, "driven_lanedir_consec": 27.834107634155927, "sim_compute_sim_state": 0.005888819992294121, "sim_compute_performance-ego0": 0.002089579635416836}}
set_robot_commands (max/mean/median/min): 0.002414669025748298
sim_compute_performance-ego0 (max/mean/median/min): 0.002089579635416836
sim_compute_sim_state (max/mean/median/min): 0.005888819992294121
sim_render-ego0 (max/mean/median/min): 0.004060633275828492
simulation-passed: 1
step_physics (max/mean/median/min): 0.11049599830157354
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 74914, submission 13567, user Márton Tim 🇭🇺, label 3626, challenge aido-LFI-full-sim-validation, step sim-1of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:05:34
survival_time_median: 24.750000000000217
in-drivable-lane_median: 1.85000000000002
driven_lanedir_consec_median: 10.16367206096551
deviation-center-line_median: 1.608733565799182


other stats
agent_compute-ego0 (max/mean/median/min): 0.04549488665596131
complete-iteration (max/mean/median/min): 0.23077553751007204
deviation-center-line (max/mean/min): 1.608733565799182
deviation-heading (max/mean/median/min): 5.327243279941921
distance-from-start (max/mean/median/min): 2.495550354023308
driven_any (max/mean/median/min): 11.8173154514203
driven_lanedir_consec (max/mean/min): 10.16367206096551
driven_lanedir (max/mean/median/min): 10.448160302331326
get_duckie_state (max/mean/median/min): 1.3747522907872357e-06
get_robot_state (max/mean/median/min): 0.003797237911532002
get_state_dump (max/mean/median/min): 0.0047520365445844585
get_ui_image (max/mean/median/min): 0.024205540457079486
in-drivable-lane (max/mean/min): 1.85000000000002
per-episodes
details{"LFI-full-udem1-000-ego0": {"driven_any": 11.8173154514203, "get_ui_image": 0.024205540457079486, "step_physics": 0.13320350310494822, "survival_time": 24.750000000000217, "driven_lanedir": 10.448160302331326, "get_state_dump": 0.0047520365445844585, "get_robot_state": 0.003797237911532002, "sim_render-ego0": 0.003984016276174976, "get_duckie_state": 1.3747522907872357e-06, "in-drivable-lane": 1.85000000000002, "deviation-heading": 5.327243279941921, "agent_compute-ego0": 0.04549488665596131, "complete-iteration": 0.23077553751007204, "set_robot_commands": 0.0023902361431429463, "distance-from-start": 2.495550354023308, "deviation-center-line": 1.608733565799182, "driven_lanedir_consec": 10.16367206096551, "sim_compute_sim_state": 0.010813464080133743, "sim_compute_performance-ego0": 0.002040567897981213}}
set_robot_commands (max/mean/median/min): 0.0023902361431429463
sim_compute_performance-ego0 (max/mean/median/min): 0.002040567897981213
sim_compute_sim_state (max/mean/median/min): 0.010813464080133743
sim_render-ego0 (max/mean/median/min): 0.003984016276174976
simulation-passed: 1
step_physics (max/mean/median/min): 0.13320350310494822
survival_time (max/mean/min): 24.750000000000217
No reset possible
Job 74900, submission 13567, user Márton Tim 🇭🇺, label 3626, challenge aido-LFI-full-sim-validation, step sim-1of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:02:29
survival_time_median: 7.499999999999981
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.0015747541101776
deviation-center-line_median: 0.5705091390843485


other stats
agent_compute-ego0 (max/mean/median/min): 0.04439958831332377
complete-iteration (max/mean/median/min): 0.2291058467713413
deviation-center-line (max/mean/min): 0.5705091390843485
deviation-heading (max/mean/median/min): 2.278908405991009
distance-from-start (max/mean/median/min): 1.5984998154929746
driven_any (max/mean/median/min): 3.1612992398797424
driven_lanedir_consec (max/mean/min): 3.0015747541101776
driven_lanedir (max/mean/median/min): 3.001576423074943
get_duckie_state (max/mean/median/min): 1.9705058723096025e-06
get_robot_state (max/mean/median/min): 0.00375812416834547
get_state_dump (max/mean/median/min): 0.004738545575678744
get_ui_image (max/mean/median/min): 0.024716988304592916
in-drivable-lane (max/mean/min): 0.0
per-episodes
details{"LFI-full-udem1-000-ego0": {"driven_any": 3.1612992398797424, "get_ui_image": 0.024716988304592916, "step_physics": 0.13239677536566527, "survival_time": 7.499999999999981, "driven_lanedir": 3.001576423074943, "get_state_dump": 0.004738545575678744, "get_robot_state": 0.00375812416834547, "sim_render-ego0": 0.003937774936094979, "get_duckie_state": 1.9705058723096025e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.278908405991009, "agent_compute-ego0": 0.04439958831332377, "complete-iteration": 0.2291058467713413, "set_robot_commands": 0.0023509050836626267, "distance-from-start": 1.5984998154929746, "deviation-center-line": 0.5705091390843485, "driven_lanedir_consec": 3.0015747541101776, "sim_compute_sim_state": 0.01076382990704467, "sim_compute_performance-ego0": 0.0019510487057515329}}
set_robot_commands (max/mean/median/min): 0.0023509050836626267
sim_compute_performance-ego0 (max/mean/median/min): 0.0019510487057515329
sim_compute_sim_state (max/mean/median/min): 0.01076382990704467
sim_render-ego0 (max/mean/median/min): 0.003937774936094979
simulation-passed: 1
step_physics (max/mean/median/min): 0.13239677536566527
survival_time (max/mean/min): 7.499999999999981
No reset possible
Job 74892, submission 13568, user Márton Tim 🇭🇺, label 3626, challenge aido-LFI-sim-testing, step sim-3of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:01:02
other stats
simulation-passed: 1
skipped: 1
No reset possible
Job 74888, submission 13568, user Márton Tim 🇭🇺, label 3626, challenge aido-LFI-sim-testing, step sim-3of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:01:01
other stats
simulation-passed: 1
skipped: 1
No reset possible
Job 74885, submission 13568, user Márton Tim 🇭🇺, label 3626, challenge aido-LFI-sim-testing, step sim-3of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:01:07
other stats
simulation-passed: 1
skipped: 1
No reset possible
Job 74880, submission 13569, user Márton Tim 🇭🇺, label 3626, challenge aido-LFI-sim-validation, step sim-3of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:01:03
other stats
simulation-passed: 1
skipped: 1
No reset possible
Job 74868, submission 13571, user Márton Tim 🇭🇺, label 3626, challenge aido-LFP-sim-validation, step sim-0of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:01:25
survival_time_median: 2.6499999999999986
in-drivable-lane_median: 1.0000000000000009
driven_lanedir_consec_median: 0.24093526109836816
deviation-center-line_median: 0.10809470966167424


other stats
agent_compute-ego0 (max/mean/median/min): 0.047233744903847026
complete-iteration (max/mean/median/min): 0.2226915403648659
deviation-center-line (max/mean/min): 0.10809470966167424
deviation-heading (max/mean/median/min): 0.9657426727971758
distance-from-start (max/mean/median/min): 0.8443933385751533
driven_any (max/mean/median/min): 0.8602204885992286
driven_lanedir_consec (max/mean/min): 0.24093526109836816
driven_lanedir (max/mean/median/min): 0.24093526109836816
get_duckie_state (max/mean/median/min): 0.021934875735530147
get_robot_state (max/mean/median/min): 0.003906364794130679
get_state_dump (max/mean/median/min): 0.008238364149022985
get_ui_image (max/mean/median/min): 0.024692566306502732
in-drivable-lane (max/mean/min): 1.0000000000000009
per-episodes
details{"LFP-norm-zigzag-000-ego0": {"driven_any": 0.8602204885992286, "get_ui_image": 0.024692566306502732, "step_physics": 0.0967164878492002, "survival_time": 2.6499999999999986, "driven_lanedir": 0.24093526109836816, "get_state_dump": 0.008238364149022985, "get_robot_state": 0.003906364794130679, "sim_render-ego0": 0.004138937702885381, "get_duckie_state": 0.021934875735530147, "in-drivable-lane": 1.0000000000000009, "deviation-heading": 0.9657426727971758, "agent_compute-ego0": 0.047233744903847026, "complete-iteration": 0.2226915403648659, "set_robot_commands": 0.002402694137008102, "distance-from-start": 0.8443933385751533, "deviation-center-line": 0.10809470966167424, "driven_lanedir_consec": 0.24093526109836816, "sim_compute_sim_state": 0.0112349236453021, "sim_compute_performance-ego0": 0.0020886306409482604}}
set_robot_commands (max/mean/median/min): 0.002402694137008102
sim_compute_performance-ego0 (max/mean/median/min): 0.0020886306409482604
sim_compute_sim_state (max/mean/median/min): 0.0112349236453021
sim_render-ego0 (max/mean/median/min): 0.004138937702885381
simulation-passed: 1
step_physics (max/mean/median/min): 0.0967164878492002
survival_time (max/mean/min): 2.6499999999999986
No reset possible
Job 74764, submission 13577, user Márton Tim 🇭🇺, label 3626, challenge aido-LFV_multi-sim-testing, step 427, status: success, up to date: yes, evaluator gpu-production-spot-0-01, duration 0:23:21
survival_time_median: 16.400000000000098
in-drivable-lane_median: 0.12500000000000178
driven_lanedir_consec_median: 5.899559807293755
deviation-center-line_median: 1.3615799221118956


other stats
agent_compute-ego0: max 0.04609651695993534, mean 0.04569267113220293, median 0.04609651695993534, min 0.044884979476738134
agent_compute-ego1: max 0.04658631858129994, mean 0.046051635039088344, median 0.04658631858129994, min 0.04498226795466516
agent_compute-ego2 (max/mean/median/min): 0.046586532360876946
agent_compute-ego3 (max/mean/median/min): 0.04676198959350586
complete-iteration: max 0.6379017373348804, mean 0.5331898377494585, median 0.6379017373348804, min 0.3237660385786147
deviation-center-line: max 2.629834058087942, mean 1.5171566280012396, min 0.3274858477044686
deviation-heading: max 10.124598636724365, mean 5.748647147758239, median 5.4333116241261665, min 1.7519966377849592
distance-from-start: max 3.654813754229599, mean 1.964231170235441, median 1.9237236911242532, min 1.072169036046019
driven_any: max 28.30621520914095, mean 12.407736874192103, median 6.467608135046186, min 2.456893503870476
driven_lanedir_consec: max 27.91092836085831, mean 11.997093576847131, min 1.997341113368025
driven_lanedir: max 27.91092836085831, mean 11.997093576847131, median 5.899559807293755, min 1.997341113368025
get_duckie_state: max 1.964598078858164e-06, mean 1.8554545121476256e-06, median 1.964598078858164e-06, min 1.6371673787265495e-06
get_robot_state: max 0.015040397644042969, mean 0.012558933631797447, median 0.015040397644042969, min 0.007596005607306411
get_state_dump: max 0.010128532861866124, mean 0.008957809350027298, median 0.010128532861866124, min 0.006616362326349644
get_ui_image: max 0.02401763110175321, mean 0.02219108296299988, median 0.02401763110175321, min 0.01853798668549321
in-drivable-lane: max 11.400000000000109, mean 2.108333333333354, min 0.0
per-episodes
details{"LFV_multi-norm-techtrack-000-ego0": {"driven_any": 2.456893503870476, "get_ui_image": 0.02401763110175321, "step_physics": 0.3416453125266681, "survival_time": 16.400000000000098, "driven_lanedir": 1.997341113368025, "get_state_dump": 0.010128532861866124, "get_robot_state": 0.015040397644042969, "sim_render-ego0": 0.004131679476937987, "sim_render-ego1": 0.004076818564742532, "sim_render-ego2": 0.004047003922853789, "sim_render-ego3": 0.004142205403568535, "get_duckie_state": 1.964598078858164e-06, "in-drivable-lane": 11.400000000000109, "deviation-heading": 1.7519966377849592, "agent_compute-ego0": 0.04609651695993534, "agent_compute-ego1": 0.04658631858129994, "agent_compute-ego2": 0.046586532360876946, "agent_compute-ego3": 0.04676198959350586, "complete-iteration": 0.6379017373348804, "set_robot_commands": 0.0025146485824353066, "distance-from-start": 1.988781261153932, "deviation-center-line": 0.3274858477044686, "driven_lanedir_consec": 1.997341113368025, "sim_compute_sim_state": 0.025918077915272817, "sim_compute_performance-ego0": 0.0021405778032668092, "sim_compute_performance-ego1": 0.0021500188891286185, "sim_compute_performance-ego2": 0.002119844807679892, "sim_compute_performance-ego3": 0.0020838602697957976}, "LFV_multi-norm-techtrack-000-ego1": {"driven_any": 5.268853200770787, "get_ui_image": 0.02401763110175321, "step_physics": 0.3416453125266681, "survival_time": 16.400000000000098, "driven_lanedir": 4.486332677851148, "get_state_dump": 0.010128532861866124, "get_robot_state": 0.015040397644042969, "sim_render-ego0": 0.004131679476937987, "sim_render-ego1": 0.004076818564742532, "sim_render-ego2": 0.004047003922853789, "sim_render-ego3": 0.004142205403568535, "get_duckie_state": 1.964598078858164e-06, "in-drivable-lane": 1.0000000000000089, "deviation-heading": 7.8212439305937345, "agent_compute-ego0": 0.04609651695993534, "agent_compute-ego1": 0.04658631858129994, "agent_compute-ego2": 0.046586532360876946, "agent_compute-ego3": 
0.04676198959350586, "complete-iteration": 0.6379017373348804, "set_robot_commands": 0.0025146485824353066, "distance-from-start": 2.132562261241059, "deviation-center-line": 0.976176411856675, "driven_lanedir_consec": 4.486332677851148, "sim_compute_sim_state": 0.025918077915272817, "sim_compute_performance-ego0": 0.0021405778032668092, "sim_compute_performance-ego1": 0.0021500188891286185, "sim_compute_performance-ego2": 0.002119844807679892, "sim_compute_performance-ego3": 0.0020838602697957976}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 7.666363069321585, "get_ui_image": 0.02401763110175321, "step_physics": 0.3416453125266681, "survival_time": 16.400000000000098, "driven_lanedir": 7.312786936736362, "get_state_dump": 0.010128532861866124, "get_robot_state": 0.015040397644042969, "sim_render-ego0": 0.004131679476937987, "sim_render-ego1": 0.004076818564742532, "sim_render-ego2": 0.004047003922853789, "sim_render-ego3": 0.004142205403568535, "get_duckie_state": 1.964598078858164e-06, "in-drivable-lane": 0.25000000000000355, "deviation-heading": 3.0453793176585986, "agent_compute-ego0": 0.04609651695993534, "agent_compute-ego1": 0.04658631858129994, "agent_compute-ego2": 0.046586532360876946, "agent_compute-ego3": 0.04676198959350586, "complete-iteration": 0.6379017373348804, "set_robot_commands": 0.0025146485824353066, "distance-from-start": 3.654813754229599, "deviation-center-line": 1.153141972814827, "driven_lanedir_consec": 7.312786936736362, "sim_compute_sim_state": 0.025918077915272817, "sim_compute_performance-ego0": 0.0021405778032668092, "sim_compute_performance-ego1": 0.0021500188891286185, "sim_compute_performance-ego2": 0.002119844807679892, "sim_compute_performance-ego3": 0.0020838602697957976}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 2.508149784350898, "get_ui_image": 0.02401763110175321, "step_physics": 0.3416453125266681, "survival_time": 16.400000000000098, "driven_lanedir": 2.4623317350082936, "get_state_dump": 
0.010128532861866124, "get_robot_state": 0.015040397644042969, "sim_render-ego0": 0.004131679476937987, "sim_render-ego1": 0.004076818564742532, "sim_render-ego2": 0.004047003922853789, "sim_render-ego3": 0.004142205403568535, "get_duckie_state": 1.964598078858164e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.8172225478430823, "agent_compute-ego0": 0.04609651695993534, "agent_compute-ego1": 0.04658631858129994, "agent_compute-ego2": 0.046586532360876946, "agent_compute-ego3": 0.04676198959350586, "complete-iteration": 0.6379017373348804, "set_robot_commands": 0.0025146485824353066, "distance-from-start": 1.858666121094574, "deviation-center-line": 1.5700178714089637, "driven_lanedir_consec": 2.4623317350082936, "sim_compute_sim_state": 0.025918077915272817, "sim_compute_performance-ego0": 0.0021405778032668092, "sim_compute_performance-ego1": 0.0021500188891286185, "sim_compute_performance-ego2": 0.002119844807679892, "sim_compute_performance-ego3": 0.0020838602697957976}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 28.239946477697934, "get_ui_image": 0.01853798668549321, "step_physics": 0.17491729015315402, "survival_time": 59.99999999999873, "driven_lanedir": 27.81284063726066, "get_state_dump": 0.006616362326349644, "get_robot_state": 0.007596005607306411, "sim_render-ego0": 0.0039392667050961155, "sim_render-ego1": 0.003966425181824797, "get_duckie_state": 1.6371673787265495e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.124598636724365, "agent_compute-ego0": 0.044884979476738134, "agent_compute-ego1": 0.04498226795466516, "complete-iteration": 0.3237660385786147, "set_robot_commands": 0.0024141117893190407, "distance-from-start": 1.072169036046019, "deviation-center-line": 2.629834058087942, "driven_lanedir_consec": 27.81284063726066, "sim_compute_sim_state": 0.009238161909689416, "sim_compute_performance-ego0": 0.0020599069444464206, "sim_compute_performance-ego1": 0.0020687079846511574}, "LFV_multi-norm-small_loop-000-ego1": 
{"driven_any": 28.30621520914095, "get_ui_image": 0.01853798668549321, "step_physics": 0.17491729015315402, "survival_time": 59.99999999999873, "driven_lanedir": 27.91092836085831, "get_state_dump": 0.006616362326349644, "get_robot_state": 0.007596005607306411, "sim_render-ego0": 0.0039392667050961155, "sim_render-ego1": 0.003966425181824797, "get_duckie_state": 1.6371673787265495e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.931441815944694, "agent_compute-ego0": 0.044884979476738134, "agent_compute-ego1": 0.04498226795466516, "complete-iteration": 0.3237660385786147, "set_robot_commands": 0.0024141117893190407, "distance-from-start": 1.0783945876474623, "deviation-center-line": 2.4462836061345627, "driven_lanedir_consec": 27.91092836085831, "sim_compute_sim_state": 0.009238161909689416, "sim_compute_performance-ego0": 0.0020599069444464206, "sim_compute_performance-ego1": 0.0020687079846511574}}
set_robot_commands: max 0.0025146485824353066, mean 0.002481136318063218, median 0.0025146485824353066, min 0.0024141117893190407
sim_compute_performance-ego0: max 0.0021405778032668092, mean 0.0021136875169933465, median 0.0021405778032668092, min 0.0020599069444464206
sim_compute_performance-ego1: max 0.0021500188891286185, mean 0.002122915254302798, median 0.0021500188891286185, min 0.0020687079846511574
sim_compute_performance-ego2 (max/mean/median/min): 0.002119844807679892
sim_compute_performance-ego3 (max/mean/median/min): 0.0020838602697957976
sim_compute_sim_state: max 0.025918077915272817, mean 0.02035810591341168, median 0.025918077915272817, min 0.009238161909689416
sim_render-ego0: max 0.004131679476937987, mean 0.00406754188632403, median 0.004131679476937987, min 0.0039392667050961155
sim_render-ego1: max 0.004076818564742532, mean 0.00404002077043662, median 0.004076818564742532, min 0.003966425181824797
sim_render-ego2 (max/mean/median/min): 0.004047003922853789
sim_render-ego3 (max/mean/median/min): 0.004142205403568535
simulation-passed: 1
step_physics: max 0.3416453125266681, mean 0.28606930506883005, median 0.3416453125266681, min 0.17491729015315402
survival_time: max 59.99999999999873, mean 30.93333333333297, min 16.400000000000098
No reset possible
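The aggregate rows in each job block (the max/mean/median/min entries) are per-metric summaries of the per-episode values listed under "details". A minimal sketch of that aggregation, assuming the details blob has already been parsed into a dict mapping episode name to its metrics (episode names follow the dump; the values here are rounded stand-ins for illustration):

```python
import statistics

# Per-episode metrics, as parsed from a "details" blob
# (episode names from the dump; values rounded for illustration).
details = {
    "LFV_multi-norm-techtrack-000-ego0": {
        "survival_time": 16.0, "deviation-center-line": 0.25},
    "LFV_multi-norm-small_loop-000-ego0": {
        "survival_time": 60.0, "deviation-center-line": 2.75},
}

def aggregate(details):
    """Collapse per-episode metrics into the _max/_mean/_median/_min
    rows shown in the evaluator stats."""
    out = {}
    keys = sorted({k for ep in details.values() for k in ep})
    for k in keys:
        vals = [ep[k] for ep in details.values() if k in ep]
        out[k + "_max"] = max(vals)
        out[k + "_mean"] = statistics.mean(vals)
        out[k + "_median"] = statistics.median(vals)
        out[k + "_min"] = min(vals)
    return out

stats = aggregate(details)
```

With a single episode all four aggregates coincide, which is why most blocks on this page repeat the same number for max, mean, median, and min.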
Job 74753, submission 13581, user Andras Beres, label 202-1, challenge aido-LFI-full-sim-testing, step sim-2of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:01:05
other stats
simulation-passed: 1
skipped: 1
No reset possible
Job 74731, submission 13581, user Andras Beres, label 202-1, challenge aido-LFI-full-sim-testing, step sim-0of4, status: success, up to date: no, evaluator gpu-production-spot-0-01, duration 0:04:18
survival_time_median: 17.800000000000118
in-drivable-lane_median: 1.6000000000000156
driven_lanedir_consec_median: 5.976161286172536
deviation-center-line_median: 1.178509437848914


other stats
agent_compute-ego0 (max/mean/median/min): 0.022290650536032283
complete-iteration (max/mean/median/min): 0.2458497843488592
deviation-center-line (max/mean/min): 1.178509437848914
deviation-heading (max/mean/median/min): 3.132357291489315
distance-from-start (max/mean/median/min): 1.3277663156017658
driven_any (max/mean/median/min): 7.911053703167652
driven_lanedir_consec (max/mean/min): 5.976161286172536
driven_lanedir (max/mean/median/min): 7.322172130108806
get_duckie_state (max/mean/median/min): 1.5460476487958465e-06
get_robot_state (max/mean/median/min): 0.004080722979804715
get_state_dump (max/mean/median/min): 0.0051475993725432065
get_ui_image (max/mean/median/min): 0.027240233594963865
in-drivable-lane (max/mean/min): 1.6000000000000156
per-episodes
details{"LFI-full-4way-000-ego0": {"driven_any": 7.911053703167652, "get_ui_image": 0.027240233594963865, "step_physics": 0.16695080685014485, "survival_time": 17.800000000000118, "driven_lanedir": 7.322172130108806, "get_state_dump": 0.0051475993725432065, "get_robot_state": 0.004080722979804715, "sim_render-ego0": 0.004256201725380093, "get_duckie_state": 1.5460476487958465e-06, "in-drivable-lane": 1.6000000000000156, "deviation-heading": 3.132357291489315, "agent_compute-ego0": 0.022290650536032283, "complete-iteration": 0.2458497843488592, "set_robot_commands": 0.0025964037043039873, "distance-from-start": 1.3277663156017658, "deviation-center-line": 1.178509437848914, "driven_lanedir_consec": 5.976161286172536, "sim_compute_sim_state": 0.010943186383287446, "sim_compute_performance-ego0": 0.0022346399077514305}}
set_robot_commands_max0.0025964037043039873
set_robot_commands_mean0.0025964037043039873
set_robot_commands_median0.0025964037043039873
set_robot_commands_min0.0025964037043039873
sim_compute_performance-ego0_max0.0022346399077514305
sim_compute_performance-ego0_mean0.0022346399077514305
sim_compute_performance-ego0_median0.0022346399077514305
sim_compute_performance-ego0_min0.0022346399077514305
sim_compute_sim_state_max0.010943186383287446
sim_compute_sim_state_mean0.010943186383287446
sim_compute_sim_state_median0.010943186383287446
sim_compute_sim_state_min0.010943186383287446
sim_render-ego0_max0.004256201725380093
sim_render-ego0_mean0.004256201725380093
sim_render-ego0_median0.004256201725380093
sim_render-ego0_min0.004256201725380093
simulation-passed1
step_physics_max0.16695080685014485
step_physics_mean0.16695080685014485
step_physics_median0.16695080685014485
step_physics_min0.16695080685014485
survival_time_max17.800000000000118
survival_time_mean17.800000000000118
survival_time_min17.800000000000118
No reset possible
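The aggregate rows above are derived from the per-episode `details` JSON: each metric is collapsed into `_max`, `_mean`, `_median`, and `_min` entries across episodes, which is why single-episode jobs show the same value four times. A minimal sketch of that aggregation (the `aggregate` helper is hypothetical, not part of the challenge infrastructure; it only assumes the `details` shape shown above):

```python
import json
import statistics

def aggregate(details: dict) -> dict:
    """Collapse per-episode metrics into the _max/_mean/_median/_min rows."""
    metrics: dict = {}
    for episode_stats in details.values():
        for key, value in episode_stats.items():
            metrics.setdefault(key, []).append(value)
    out = {}
    for key, values in metrics.items():
        out[f"{key}_max"] = max(values)
        out[f"{key}_mean"] = statistics.mean(values)
        out[f"{key}_median"] = statistics.median(values)
        out[f"{key}_min"] = min(values)
    return out

# With one episode, all four aggregates coincide:
details = json.loads('{"LFI-full-4way-000-ego0": {"survival_time": 17.8}}')
stats = aggregate(details)
assert stats["survival_time_max"] == stats["survival_time_min"]
```

Jobs with several episodes (such as the multi-ego run above, with distinct `step_physics_max` and `step_physics_min`) produce genuinely different values per row.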
Job 74682 | submission 13583 | Andras Beres | 202-1 | aido-LFI-sim-testing | sim-1of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:08:08
survival_time_median 41.749999999999766
in-drivable-lane_median 5.649999999999948
driven_lanedir_consec_median 16.378758888002917
deviation-center-line_median 2.6990193101288926


other stats
agent_compute-ego0_max 0.022413478798843457
agent_compute-ego0_mean 0.022413478798843457
agent_compute-ego0_median 0.022413478798843457
agent_compute-ego0_min 0.022413478798843457
complete-iteration_max 0.2477954557637849
complete-iteration_mean 0.2477954557637849
complete-iteration_median 0.2477954557637849
complete-iteration_min 0.2477954557637849
deviation-center-line_max 2.6990193101288926
deviation-center-line_mean 2.6990193101288926
deviation-center-line_min 2.6990193101288926
deviation-heading_max 6.778408713362349
deviation-heading_mean 6.778408713362349
deviation-heading_median 6.778408713362349
deviation-heading_min 6.778408713362349
distance-from-start_max 2.215000575661974
distance-from-start_mean 2.215000575661974
distance-from-start_median 2.215000575661974
distance-from-start_min 2.215000575661974
driven_any_max 19.363454694148388
driven_any_mean 19.363454694148388
driven_any_median 19.363454694148388
driven_any_min 19.363454694148388
driven_lanedir_consec_max 16.378758888002917
driven_lanedir_consec_mean 16.378758888002917
driven_lanedir_consec_min 16.378758888002917
driven_lanedir_max 16.826103878536323
driven_lanedir_mean 16.826103878536323
driven_lanedir_median 16.826103878536323
driven_lanedir_min 16.826103878536323
get_duckie_state_max 1.5987733904824872e-06
get_duckie_state_mean 1.5987733904824872e-06
get_duckie_state_median 1.5987733904824872e-06
get_duckie_state_min 1.5987733904824872e-06
get_robot_state_max 0.0042124459047636915
get_robot_state_mean 0.0042124459047636915
get_robot_state_median 0.0042124459047636915
get_robot_state_min 0.0042124459047636915
get_state_dump_max 0.005164289303373492
get_state_dump_mean 0.005164289303373492
get_state_dump_median 0.005164289303373492
get_state_dump_min 0.005164289303373492
get_ui_image_max 0.027532084706867712
get_ui_image_mean 0.027532084706867712
get_ui_image_median 0.027532084706867712
get_ui_image_min 0.027532084706867712
in-drivable-lane_max 5.649999999999948
in-drivable-lane_mean 5.649999999999948
in-drivable-lane_min 5.649999999999948
per-episodes
details: {"LFI-norm-udem1-000-ego0": {"driven_any": 19.363454694148388, "get_ui_image": 0.027532084706867712, "step_physics": 0.16614463597393492, "survival_time": 41.749999999999766, "driven_lanedir": 16.826103878536323, "get_state_dump": 0.005164289303373492, "get_robot_state": 0.0042124459047636915, "sim_render-ego0": 0.0043835893772435535, "get_duckie_state": 1.5987733904824872e-06, "in-drivable-lane": 5.649999999999948, "deviation-heading": 6.778408713362349, "agent_compute-ego0": 0.022413478798843457, "complete-iteration": 0.2477954557637849, "set_robot_commands": 0.0026596815962540475, "distance-from-start": 2.215000575661974, "deviation-center-line": 2.6990193101288926, "driven_lanedir_consec": 16.378758888002917, "sim_compute_sim_state": 0.01289191171883396, "sim_compute_performance-ego0": 0.002290705174350282}}
set_robot_commands_max 0.0026596815962540475
set_robot_commands_mean 0.0026596815962540475
set_robot_commands_median 0.0026596815962540475
set_robot_commands_min 0.0026596815962540475
sim_compute_performance-ego0_max 0.002290705174350282
sim_compute_performance-ego0_mean 0.002290705174350282
sim_compute_performance-ego0_median 0.002290705174350282
sim_compute_performance-ego0_min 0.002290705174350282
sim_compute_sim_state_max 0.01289191171883396
sim_compute_sim_state_mean 0.01289191171883396
sim_compute_sim_state_median 0.01289191171883396
sim_compute_sim_state_min 0.01289191171883396
sim_render-ego0_max 0.0043835893772435535
sim_render-ego0_mean 0.0043835893772435535
sim_render-ego0_median 0.0043835893772435535
sim_render-ego0_min 0.0043835893772435535
simulation-passed 1
step_physics_max 0.16614463597393492
step_physics_mean 0.16614463597393492
step_physics_median 0.16614463597393492
step_physics_min 0.16614463597393492
survival_time_max 41.749999999999766
survival_time_mean 41.749999999999766
survival_time_min 41.749999999999766
No reset possible
Job 74672 | submission 13589 | Andras Beres | 202-1 | aido-LFVI-sim-testing | sim-2of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:01:03
other stats
simulation-passed 1
skipped 1
No reset possible
Job 74666 | submission 13589 | Andras Beres | 202-1 | aido-LFVI-sim-testing | sim-2of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:01:03
other stats
simulation-passed 1
skipped 1
No reset possible
Job 74620 | submission 13584 | Andras Beres | 202-1 | aido-LFI-sim-validation | sim-0of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:08:47
survival_time_median 51.799999999999194
in-drivable-lane_median 2.499999999999974
driven_lanedir_consec_median 16.34016147290914
deviation-center-line_median 3.9567454706830993


other stats
agent_compute-ego0_max 0.01911311761162665
agent_compute-ego0_mean 0.01911311761162665
agent_compute-ego0_median 0.01911311761162665
agent_compute-ego0_min 0.01911311761162665
complete-iteration_max 0.2142697927921924
complete-iteration_mean 0.2142697927921924
complete-iteration_median 0.2142697927921924
complete-iteration_min 0.2142697927921924
deviation-center-line_max 3.9567454706830993
deviation-center-line_mean 3.9567454706830993
deviation-center-line_min 3.9567454706830993
deviation-heading_max 8.799471580884262
deviation-heading_mean 8.799471580884262
deviation-heading_median 8.799471580884262
deviation-heading_min 8.799471580884262
distance-from-start_max 1.8607811176157103
distance-from-start_mean 1.8607811176157103
distance-from-start_median 1.8607811176157103
distance-from-start_min 1.8607811176157103
driven_any_max 24.19212124110183
driven_any_mean 24.19212124110183
driven_any_median 24.19212124110183
driven_any_min 24.19212124110183
driven_lanedir_consec_max 16.34016147290914
driven_lanedir_consec_mean 16.34016147290914
driven_lanedir_consec_min 16.34016147290914
driven_lanedir_max 22.56502573166116
driven_lanedir_mean 22.56502573166116
driven_lanedir_median 22.56502573166116
driven_lanedir_min 22.56502573166116
get_duckie_state_max 1.2573878583475849e-06
get_duckie_state_mean 1.2573878583475849e-06
get_duckie_state_median 1.2573878583475849e-06
get_duckie_state_min 1.2573878583475849e-06
get_robot_state_max 0.003647100730020492
get_robot_state_mean 0.003647100730020492
get_robot_state_median 0.003647100730020492
get_robot_state_min 0.003647100730020492
get_state_dump_max 0.004538340186888825
get_state_dump_mean 0.004538340186888825
get_state_dump_median 0.004538340186888825
get_state_dump_min 0.004538340186888825
get_ui_image_max 0.024451986917994176
get_ui_image_mean 0.024451986917994176
get_ui_image_median 0.024451986917994176
get_ui_image_min 0.024451986917994176
in-drivable-lane_max 2.499999999999974
in-drivable-lane_mean 2.499999999999974
in-drivable-lane_min 2.499999999999974
per-episodes
details: {"LFI-norm-4way-000-ego0": {"driven_any": 24.19212124110183, "get_ui_image": 0.024451986917994176, "step_physics": 0.1432760789516232, "survival_time": 51.799999999999194, "driven_lanedir": 22.56502573166116, "get_state_dump": 0.004538340186888825, "get_robot_state": 0.003647100730020492, "sim_render-ego0": 0.0038587366488538576, "get_duckie_state": 1.2573878583475849e-06, "in-drivable-lane": 2.499999999999974, "deviation-heading": 8.799471580884262, "agent_compute-ego0": 0.01911311761162665, "complete-iteration": 0.2142697927921924, "set_robot_commands": 0.002319415027173299, "distance-from-start": 1.8607811176157103, "deviation-center-line": 3.9567454706830993, "driven_lanedir_consec": 16.34016147290914, "sim_compute_sim_state": 0.010991909924675436, "sim_compute_performance-ego0": 0.001984370249101718}}
set_robot_commands_max 0.002319415027173299
set_robot_commands_mean 0.002319415027173299
set_robot_commands_median 0.002319415027173299
set_robot_commands_min 0.002319415027173299
sim_compute_performance-ego0_max 0.001984370249101718
sim_compute_performance-ego0_mean 0.001984370249101718
sim_compute_performance-ego0_median 0.001984370249101718
sim_compute_performance-ego0_min 0.001984370249101718
sim_compute_sim_state_max 0.010991909924675436
sim_compute_sim_state_mean 0.010991909924675436
sim_compute_sim_state_median 0.010991909924675436
sim_compute_sim_state_min 0.010991909924675436
sim_render-ego0_max 0.0038587366488538576
sim_render-ego0_mean 0.0038587366488538576
sim_render-ego0_median 0.0038587366488538576
sim_render-ego0_min 0.0038587366488538576
simulation-passed 1
step_physics_max 0.1432760789516232
step_physics_mean 0.1432760789516232
step_physics_median 0.1432760789516232
step_physics_min 0.1432760789516232
survival_time_max 51.799999999999194
survival_time_mean 51.799999999999194
survival_time_min 51.799999999999194
No reset possible
Job 74611 | submission 13584 | Andras Beres | 202-1 | aido-LFI-sim-validation | sim-2of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:01:08
other stats
simulation-passed 1
skipped 1
No reset possible
Job 74606 | submission 13584 | Andras Beres | 202-1 | aido-LFI-sim-validation | sim-3of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:01:05
other stats
simulation-passed 1
skipped 1
No reset possible
Job 74600 | submission 13584 | Andras Beres | 202-1 | aido-LFI-sim-validation | sim-3of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:01:09
other stats
simulation-passed 1
skipped 1
No reset possible
Job 74595 | submission 13584 | Andras Beres | 202-1 | aido-LFI-sim-validation | sim-3of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:01:05
other stats
simulation-passed 1
skipped 1
No reset possible
Job 74581 | submission 13588 | Andras Beres | 202-1 | aido-LFV-sim-testing | sim-3of4 | success | up-to-date: no | gpu-production-spot-0-01 | 0:02:38
survival_time_median 3.349999999999996
in-drivable-lane_median 0.0
driven_lanedir_consec_median 1.1752526240451495
deviation-center-line_median 0.3583743856943992


other stats
agent_compute-ego0_max 0.018939449506647447
agent_compute-ego0_mean 0.018939449506647447
agent_compute-ego0_median 0.018939449506647447
agent_compute-ego0_min 0.018939449506647447
agent_compute-npc0_max 0.05595450892167933
agent_compute-npc0_mean 0.05595450892167933
agent_compute-npc0_median 0.05595450892167933
agent_compute-npc0_min 0.05595450892167933
agent_compute-npc1_max 0.056402812985812914
agent_compute-npc1_mean 0.056402812985812914
agent_compute-npc1_median 0.056402812985812914
agent_compute-npc1_min 0.056402812985812914
agent_compute-npc2_max 0.055905545459074134
agent_compute-npc2_mean 0.055905545459074134
agent_compute-npc2_median 0.055905545459074134
agent_compute-npc2_min 0.055905545459074134
agent_compute-npc3_max 0.05686336405137006
agent_compute-npc3_mean 0.05686336405137006
agent_compute-npc3_median 0.05686336405137006
agent_compute-npc3_min 0.05686336405137006
complete-iteration_max 0.6809112710111281
complete-iteration_mean 0.6809112710111281
complete-iteration_median 0.6809112710111281
complete-iteration_min 0.6809112710111281
deviation-center-line_max 0.3583743856943992
deviation-center-line_mean 0.3583743856943992
deviation-center-line_min 0.3583743856943992
deviation-heading_max 0.3390976523155761
deviation-heading_mean 0.3390976523155761
deviation-heading_median 0.3390976523155761
deviation-heading_min 0.3390976523155761
distance-from-start_max 0.955915857874604
distance-from-start_mean 0.955915857874604
distance-from-start_median 0.955915857874604
distance-from-start_min 0.955915857874604
driven_any_max 1.185102617216926
driven_any_mean 1.185102617216926
driven_any_median 1.185102617216926
driven_any_min 1.185102617216926
driven_lanedir_consec_max 1.1752526240451495
driven_lanedir_consec_mean 1.1752526240451495
driven_lanedir_consec_min 1.1752526240451495
driven_lanedir_max 1.1752526240451495
driven_lanedir_mean 1.1752526240451495
driven_lanedir_median 1.1752526240451495
driven_lanedir_min 1.1752526240451495
get_duckie_state_max 1.914360943962546e-06
get_duckie_state_mean 1.914360943962546e-06
get_duckie_state_median 1.914360943962546e-06
get_duckie_state_min 1.914360943962546e-06
get_robot_state_max 0.021283286459305707
get_robot_state_mean 0.021283286459305707
get_robot_state_median 0.021283286459305707
get_robot_state_min 0.021283286459305707
get_state_dump_max 0.013644853058983298
get_state_dump_mean 0.013644853058983298
get_state_dump_median 0.013644853058983298
get_state_dump_min 0.013644853058983298
get_ui_image_max 0.02760901871849509
get_ui_image_mean 0.02760901871849509
get_ui_image_median 0.02760901871849509
get_ui_image_min 0.02760901871849509
in-drivable-lane_max 0.0
in-drivable-lane_mean 0.0
in-drivable-lane_min 0.0
per-episodes
details: {"LFV-norm-techtrack-000-ego0": {"driven_any": 1.185102617216926, "get_ui_image": 0.02760901871849509, "step_physics": 0.277405321598053, "survival_time": 3.349999999999996, "driven_lanedir": 1.1752526240451495, "get_state_dump": 0.013644853058983298, "get_robot_state": 0.021283286459305707, "sim_render-ego0": 0.004499950829674216, "sim_render-npc0": 0.004406659042134005, "sim_render-npc1": 0.004562637385200052, "sim_render-npc2": 0.004778805901022518, "sim_render-npc3": 0.004677800571217257, "get_duckie_state": 1.914360943962546e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.3390976523155761, "agent_compute-ego0": 0.018939449506647447, "agent_compute-npc0": 0.05595450892167933, "agent_compute-npc1": 0.056402812985812914, "agent_compute-npc2": 0.055905545459074134, "agent_compute-npc3": 0.05686336405137006, "complete-iteration": 0.6809112710111281, "set_robot_commands": 0.0029594652792986702, "distance-from-start": 0.955915857874604, "deviation-center-line": 0.3583743856943992, "driven_lanedir_consec": 1.1752526240451495, "sim_compute_sim_state": 0.04712440336451811, "sim_compute_performance-ego0": 0.002504236557904412, "sim_compute_performance-npc0": 0.0024516933104571175, "sim_compute_performance-npc1": 0.0023580263642703787, "sim_compute_performance-npc2": 0.0025457178845125087, "sim_compute_performance-npc3": 0.0023149637614979465}}
set_robot_commands_max 0.0029594652792986702
set_robot_commands_mean 0.0029594652792986702
set_robot_commands_median 0.0029594652792986702
set_robot_commands_min 0.0029594652792986702
sim_compute_performance-ego0_max 0.002504236557904412
sim_compute_performance-ego0_mean 0.002504236557904412
sim_compute_performance-ego0_median 0.002504236557904412
sim_compute_performance-ego0_min 0.002504236557904412
sim_compute_performance-npc0_max 0.0024516933104571175
sim_compute_performance-npc0_mean 0.0024516933104571175
sim_compute_performance-npc0_median 0.0024516933104571175
sim_compute_performance-npc0_min 0.0024516933104571175
sim_compute_performance-npc1_max 0.0023580263642703787
sim_compute_performance-npc1_mean 0.0023580263642703787
sim_compute_performance-npc1_median 0.0023580263642703787
sim_compute_performance-npc1_min 0.0023580263642703787
sim_compute_performance-npc2_max 0.0025457178845125087
sim_compute_performance-npc2_mean 0.0025457178845125087
sim_compute_performance-npc2_median 0.0025457178845125087
sim_compute_performance-npc2_min 0.0025457178845125087
sim_compute_performance-npc3_max 0.0023149637614979465
sim_compute_performance-npc3_mean 0.0023149637614979465
sim_compute_performance-npc3_median 0.0023149637614979465
sim_compute_performance-npc3_min 0.0023149637614979465
sim_compute_sim_state_max 0.04712440336451811
sim_compute_sim_state_mean 0.04712440336451811
sim_compute_sim_state_median 0.04712440336451811
sim_compute_sim_state_min 0.04712440336451811
sim_render-ego0_max 0.004499950829674216
sim_render-ego0_mean 0.004499950829674216
sim_render-ego0_median 0.004499950829674216
sim_render-ego0_min 0.004499950829674216
sim_render-npc0_max 0.004406659042134005
sim_render-npc0_mean 0.004406659042134005
sim_render-npc0_median 0.004406659042134005
sim_render-npc0_min 0.004406659042134005
sim_render-npc1_max 0.004562637385200052
sim_render-npc1_mean 0.004562637385200052
sim_render-npc1_median 0.004562637385200052
sim_render-npc1_min 0.004562637385200052
sim_render-npc2_max 0.004778805901022518
sim_render-npc2_mean 0.004778805901022518
sim_render-npc2_median 0.004778805901022518
sim_render-npc2_min 0.004778805901022518
sim_render-npc3_max 0.004677800571217257
sim_render-npc3_mean 0.004677800571217257
sim_render-npc3_median 0.004677800571217257
sim_render-npc3_min 0.004677800571217257
simulation-passed 1
step_physics_max 0.277405321598053
step_physics_mean 0.277405321598053
step_physics_median 0.277405321598053
step_physics_min 0.277405321598053
survival_time_max 3.349999999999996
survival_time_mean 3.349999999999996
survival_time_min 3.349999999999996
No reset possible
Job 74572 | submission 13591 | Andras Beres | 202-1 | aido-LFVI_multi-sim-validation | sim-0of4 | host-error | up-to-date: no | gpu-production-spot-0-01 | 0:01:12
InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 277, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

No reset possible
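The host-error above is raised while `DaggerAgent.load_from_checkpoint` moves the model's parameters to the GPU (`.cuda()`), which aborts with `RuntimeError: CUDA error: out of memory` on a busy evaluator. A common mitigation is to catch the OOM and fall back to CPU. A minimal, framework-agnostic sketch of that pattern (hypothetical names; this is not the submission's actual code):

```python
def with_fallback(primary, fallback, oom_marker="out of memory"):
    """Run primary(); on a CUDA-style OOM RuntimeError, run fallback() instead."""
    try:
        return primary()
    except RuntimeError as e:
        if oom_marker in str(e):
            return fallback()
        raise  # any other RuntimeError is still fatal

# Simulate the failure seen in the traceback above:
def load_on_gpu():
    raise RuntimeError("CUDA error: out of memory")

def load_on_cpu():
    return "model-on-cpu"

model = with_fallback(load_on_gpu, load_on_cpu)  # → "model-on-cpu"
```

With PyTorch the same idea would wrap `model.cuda()` in the `try`, falling back to `model.cpu()` (optionally after `torch.cuda.empty_cache()`); the agent then runs slower instead of aborting the whole evaluation job.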
Job 74560 | submission 13591 | Andras Beres | 202-1 | aido-LFVI_multi-sim-validation | sim-0of4 | host-error | up-to-date: no | gpu-production-spot-0-01 | 0:01:09
InvalidEnvironment: [...]
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 277, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

No reset possible
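The failure above is raised while `DaggerAgent.load_from_checkpoint(...)` moves the model's parameters to the GPU (`module.cuda()`), which aborts the "ego2" node before the episode can start. One defensive pattern is to catch the CUDA out-of-memory `RuntimeError` and fall back to CPU so the node can still initialize. The sketch below uses a stand-in loader and a fake `cuda` call rather than the real `DaggerAgent` API; `load_with_cuda_fallback` and `fake_cuda` are hypothetical names for illustration only:

```python
def load_with_cuda_fallback(load_fn, to_cuda):
    """Try to place a freshly loaded model on the GPU; fall back to CPU
    when CUDA reports out-of-memory instead of crashing the node."""
    model = load_fn()
    try:
        return to_cuda(model), "cuda"
    except RuntimeError as e:
        # Only swallow OOM; any other RuntimeError propagates unchanged.
        if "out of memory" not in str(e):
            raise
        return model, "cpu"

# Stub mimicking torch's behaviour on an exhausted GPU.
def fake_cuda(model):
    raise RuntimeError("CUDA error: out of memory")

model, device = load_with_cuda_fallback(lambda: {"weights": [0.1, 0.2]}, fake_cuda)
# device is "cpu"; the model is still usable, just slower.
```

With real PyTorch this would wrap `model.cuda()`; note that pytorch-lightning's `load_from_checkpoint` accepts `map_location="cpu"`, so loading on CPU first and moving to GPU afterwards keeps the OOM at a point where it can be caught.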
Job 74549 | submission 13591 | Andras Beres | "202-1" | aido-LFVI_multi-sim-validation | sim-1of4 | host-error | up to date: no | gpu-production-spot-0-01 | duration 0:01:08
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 277, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

No reset possible
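The same CUDA traceback appears twice in each failed job because the experiment manager chains exceptions: `code.py` line 277 re-raises the low-level `RemoteNodeAborted` as `InvalidEnvironment` via `raise ... from e`, so Python prints the cause traceback followed by "The above exception was the direct cause of the following exception:" and the outer one. A minimal reproduction of that chaining, with stub exception classes standing in for the `zuper_nodes` and `duckietown_challenges` types:

```python
# Stand-ins for zuper_nodes.structures.RemoteNodeAborted and
# duckietown_challenges.exceptions.InvalidEnvironment.
class RemoteNodeAborted(Exception):
    pass

class InvalidEnvironment(Exception):
    pass

try:
    try:
        # The agent node dies (here: simulated) while initializing.
        raise RemoteNodeAborted('The remote node "ego2" aborted')
    except RemoteNodeAborted as e:
        # Mirrors code.py line 277: chain the abort into a host-level
        # error so the job is classified as host-error, not failed.
        raise InvalidEnvironment("Detected out of CUDA memory") from e
except InvalidEnvironment as err:
    caught = err

# The original abort survives as the explicit cause of the outer error.
assert isinstance(caught.__cause__, RemoteNodeAborted)
```

Letting the traceback escape uncaught would print both tracebacks joined by the "direct cause" line, exactly as in the logs above.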
Job 74545 | submission 13591 | Andras Beres | "202-1" | aido-LFVI_multi-sim-validation | sim-1of4 | host-error | up to date: no | gpu-production-spot-0-01 | duration 0:01:10
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 60, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 33, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 277, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 271, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

No reset possible
Job 74385 | submission 13623 | Raphael Jean | "mobile-segmentation-pedestrian" | aido-LFV_multi-sim-testing | step 427 | success | up to date: yes | gpu-production-spot-0-01 | duration 0:22:06
survival_time_median: 17.550000000000114
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.721482379235718
deviation-center-line_median: 0.9098710203434964


other stats
agent_compute-ego0_max: 0.01893954249945554
agent_compute-ego0_mean: 0.01846861903285997
agent_compute-ego0_median: 0.01893954249945554
agent_compute-ego0_min: 0.017526772099668835
agent_compute-ego1_max: 0.02003275061195547
agent_compute-ego1_mean: 0.01946231436448801
agent_compute-ego1_median: 0.02003275061195547
agent_compute-ego1_min: 0.018321441869553083
agent_compute-ego2_max: 0.02070071074095639
agent_compute-ego2_mean: 0.02070071074095639
agent_compute-ego2_median: 0.02070071074095639
agent_compute-ego2_min: 0.02070071074095639
agent_compute-ego3_max: 0.021956063129685142
agent_compute-ego3_mean: 0.021956063129685142
agent_compute-ego3_median: 0.021956063129685142
agent_compute-ego3_min: 0.021956063129685142
complete-iteration_max: 0.5805800042369149
complete-iteration_mean: 0.4683990189267743
complete-iteration_median: 0.5805800042369149
complete-iteration_min: 0.24403704830649295
deviation-center-line_max: 2.737026918058599
deviation-center-line_mean: 1.28625189289625
deviation-center-line_min: 0.07047206182239346
deviation-heading_max: 9.798206706428967
deviation-heading_mean: 3.5754364931685516
deviation-heading_median: 2.7032875208644938
deviation-heading_min: 0.5067731260160921
distance-from-start_max: 3.644622220439678
distance-from-start_mean: 2.147003655943507
distance-from-start_median: 2.0887273243020696
distance-from-start_min: 1.0672417324086576
driven_any_max: 26.74423679140594
driven_any_mean: 7.787088788073525
driven_any_median: 3.843641945561038
driven_any_min: 2.3992024770285405
driven_lanedir_consec_max: 26.171807009495364
driven_lanedir_consec_mean: 7.264992262506326
driven_lanedir_consec_min: 0.6862688565070028
driven_lanedir_max: 26.171807009495364
driven_lanedir_mean: 7.264992262506326
driven_lanedir_median: 3.721482379235718
driven_lanedir_min: 0.6862688565070028
get_duckie_state_max: 1.968985254114324e-06
get_duckie_state_mean: 1.8407774799312176e-06
get_duckie_state_median: 1.968985254114324e-06
get_duckie_state_min: 1.5843619315650043e-06
get_robot_state_max: 0.016041593795472927
get_robot_state_mean: 0.013405580312623467
get_robot_state_median: 0.016041593795472927
get_robot_state_min: 0.00813355334692454
get_state_dump_max: 0.010852707380598242
get_state_dump_mean: 0.00955172255099492
get_state_dump_median: 0.010852707380598242
get_state_dump_min: 0.006949752891788276
get_ui_image_max: 0.025496276942166416
get_ui_image_mean: 0.023261900947681876
get_ui_image_median: 0.025496276942166416
get_ui_image_min: 0.018793148958712792
in-drivable-lane_max: 58.04999999999873
in-drivable-lane_mean: 9.783333333333122
in-drivable-lane_min: 0.0
per-episodes
details{"LFV_multi-norm-techtrack-000-ego0": {"driven_any": 2.438017601556288, "get_ui_image": 0.025496276942166416, "step_physics": 0.3772987432100556, "survival_time": 17.550000000000114, "driven_lanedir": 2.3933598839202563, "get_state_dump": 0.010852707380598242, "get_robot_state": 0.016041593795472927, "sim_render-ego0": 0.004503360526128249, "sim_render-ego1": 0.00483227859843861, "sim_render-ego2": 0.005172267556190491, "sim_render-ego3": 0.005686968564987183, "get_duckie_state": 1.968985254114324e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.5406836121151626, "agent_compute-ego0": 0.01893954249945554, "agent_compute-ego1": 0.02003275061195547, "agent_compute-ego2": 0.02070071074095639, "agent_compute-ego3": 0.021956063129685142, "complete-iteration": 0.5805800042369149, "set_robot_commands": 0.0031279806386340747, "distance-from-start": 2.043973213665322, "deviation-center-line": 0.972942323961875, "driven_lanedir_consec": 2.3933598839202563, "sim_compute_sim_state": 0.02769751643592661, "sim_compute_performance-ego0": 0.0023007609627463603, "sim_compute_performance-ego1": 0.0022472258318554273, "sim_compute_performance-ego2": 0.002474960278381001, "sim_compute_performance-ego3": 0.0026516643437472258}, "LFV_multi-norm-techtrack-000-ego1": {"driven_any": 5.171895241978842, "get_ui_image": 0.025496276942166416, "step_physics": 0.3772987432100556, "survival_time": 17.550000000000114, "driven_lanedir": 5.0496048745511795, "get_state_dump": 0.010852707380598242, "get_robot_state": 0.016041593795472927, "sim_render-ego0": 0.004503360526128249, "sim_render-ego1": 0.00483227859843861, "sim_render-ego2": 0.005172267556190491, "sim_render-ego3": 0.005686968564987183, "get_duckie_state": 1.968985254114324e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.507027818557245, "agent_compute-ego0": 0.01893954249945554, "agent_compute-ego1": 0.02003275061195547, "agent_compute-ego2": 0.02070071074095639, "agent_compute-ego3": 0.021956063129685142, 
"complete-iteration": 0.5805800042369149, "set_robot_commands": 0.0031279806386340747, "distance-from-start": 2.168745223331468, "deviation-center-line": 0.6422298381222129, "driven_lanedir_consec": 5.0496048745511795, "sim_compute_sim_state": 0.02769751643592661, "sim_compute_performance-ego0": 0.0023007609627463603, "sim_compute_performance-ego1": 0.0022472258318554273, "sim_compute_performance-ego2": 0.002474960278381001, "sim_compute_performance-ego3": 0.0026516643437472258}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 7.453791967328305, "get_ui_image": 0.025496276942166416, "step_physics": 0.3772987432100556, "survival_time": 17.550000000000114, "driven_lanedir": 7.263586929790658, "get_state_dump": 0.010852707380598242, "get_robot_state": 0.016041593795472927, "sim_render-ego0": 0.004503360526128249, "sim_render-ego1": 0.00483227859843861, "sim_render-ego2": 0.005172267556190491, "sim_render-ego3": 0.005686968564987183, "get_duckie_state": 1.968985254114324e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.2340362662800173, "agent_compute-ego0": 0.01893954249945554, "agent_compute-ego1": 0.02003275061195547, "agent_compute-ego2": 0.02070071074095639, "agent_compute-ego3": 0.021956063129685142, "complete-iteration": 0.5805800042369149, "set_robot_commands": 0.0031279806386340747, "distance-from-start": 3.644622220439678, "deviation-center-line": 0.8467997167251179, "driven_lanedir_consec": 7.263586929790658, "sim_compute_sim_state": 0.02769751643592661, "sim_compute_performance-ego0": 0.0023007609627463603, "sim_compute_performance-ego1": 0.0022472258318554273, "sim_compute_performance-ego2": 0.002474960278381001, "sim_compute_performance-ego3": 0.0026516643437472258}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 2.3992024770285405, "get_ui_image": 0.025496276942166416, "step_physics": 0.3772987432100556, "survival_time": 17.550000000000114, "driven_lanedir": 2.025326020773506, "get_state_dump": 0.010852707380598242, "get_robot_state": 
0.016041593795472927, "sim_render-ego0": 0.004503360526128249, "sim_render-ego1": 0.00483227859843861, "sim_render-ego2": 0.005172267556190491, "sim_render-ego3": 0.005686968564987183, "get_duckie_state": 1.968985254114324e-06, "in-drivable-lane": 0.6499999999999977, "deviation-heading": 2.865891429613825, "agent_compute-ego0": 0.01893954249945554, "agent_compute-ego1": 0.02003275061195547, "agent_compute-ego2": 0.02070071074095639, "agent_compute-ego3": 0.021956063129685142, "complete-iteration": 0.5805800042369149, "set_robot_commands": 0.0031279806386340747, "distance-from-start": 1.823958110877098, "deviation-center-line": 2.448040498687301, "driven_lanedir_consec": 2.025326020773506, "sim_compute_sim_state": 0.02769751643592661, "sim_compute_performance-ego0": 0.0023007609627463603, "sim_compute_performance-ego1": 0.0022472258318554273, "sim_compute_performance-ego2": 0.002474960278381001, "sim_compute_performance-ego3": 0.0026516643437472258}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 26.74423679140594, "get_ui_image": 0.018793148958712792, "step_physics": 0.14894030830643754, "survival_time": 59.99999999999873, "driven_lanedir": 26.171807009495364, "get_state_dump": 0.006949752891788276, "get_robot_state": 0.00813355334692454, "sim_render-ego0": 0.004179693082290129, "sim_render-ego1": 0.004701564353669712, "get_duckie_state": 1.5843619315650043e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.798206706428967, "agent_compute-ego0": 0.017526772099668835, "agent_compute-ego1": 0.018321441869553083, "complete-iteration": 0.24403704830649295, "set_robot_commands": 0.0027989846879893994, "distance-from-start": 1.0672417324086576, "deviation-center-line": 2.737026918058599, "driven_lanedir_consec": 26.171807009495364, "sim_compute_sim_state": 0.006442685806185479, "sim_compute_performance-ego0": 0.0021833710428280795, "sim_compute_performance-ego1": 0.0023118520557235222}, "LFV_multi-norm-small_loop-000-ego1": {"driven_any": 2.515388649143235, 
"get_ui_image": 0.018793148958712792, "step_physics": 0.14894030830643754, "survival_time": 59.99999999999873, "driven_lanedir": 0.6862688565070028, "get_state_dump": 0.006949752891788276, "get_robot_state": 0.00813355334692454, "sim_render-ego0": 0.004179693082290129, "sim_render-ego1": 0.004701564353669712, "get_duckie_state": 1.5843619315650043e-06, "in-drivable-lane": 58.04999999999873, "deviation-heading": 0.5067731260160921, "agent_compute-ego0": 0.017526772099668835, "agent_compute-ego1": 0.018321441869553083, "complete-iteration": 0.24403704830649295, "set_robot_commands": 0.0027989846879893994, "distance-from-start": 2.133481434938817, "deviation-center-line": 0.07047206182239346, "driven_lanedir_consec": 0.6862688565070028, "sim_compute_sim_state": 0.006442685806185479, "sim_compute_performance-ego0": 0.0021833710428280795, "sim_compute_performance-ego1": 0.0023118520557235222}}
set_robot_commands_max: 0.0031279806386340747
set_robot_commands_mean: 0.0030183153217525163
set_robot_commands_median: 0.0031279806386340747
set_robot_commands_min: 0.0027989846879893994
sim_compute_performance-ego0_max: 0.0023007609627463603
sim_compute_performance-ego0_mean: 0.002261630989440267
sim_compute_performance-ego0_median: 0.0023007609627463603
sim_compute_performance-ego0_min: 0.0021833710428280795
sim_compute_performance-ego1_max: 0.0023118520557235222
sim_compute_performance-ego1_mean: 0.0022687679064781253
sim_compute_performance-ego1_median: 0.0022472258318554273
sim_compute_performance-ego1_min: 0.0022472258318554273
sim_compute_performance-ego2_max: 0.002474960278381001
sim_compute_performance-ego2_mean: 0.002474960278381001
sim_compute_performance-ego2_median: 0.002474960278381001
sim_compute_performance-ego2_min: 0.002474960278381001
sim_compute_performance-ego3_max: 0.0026516643437472258
sim_compute_performance-ego3_mean: 0.0026516643437472258
sim_compute_performance-ego3_median: 0.0026516643437472258
sim_compute_performance-ego3_min: 0.0026516643437472258
sim_compute_sim_state_max: 0.02769751643592661
sim_compute_sim_state_mean: 0.020612572892679565
sim_compute_sim_state_median: 0.02769751643592661
sim_compute_sim_state_min: 0.006442685806185479
sim_render-ego0_max: 0.004503360526128249
sim_render-ego0_mean: 0.004395471378182209
sim_render-ego0_median: 0.004503360526128249
sim_render-ego0_min: 0.004179693082290129
sim_render-ego1_max: 0.00483227859843861
sim_render-ego1_mean: 0.004788707183515644
sim_render-ego1_median: 0.00483227859843861
sim_render-ego1_min: 0.004701564353669712
sim_render-ego2_max: 0.005172267556190491
sim_render-ego2_mean: 0.005172267556190491
sim_render-ego2_median: 0.005172267556190491
sim_render-ego2_min: 0.005172267556190491
sim_render-ego3_max: 0.005686968564987183
sim_render-ego3_mean: 0.005686968564987183
sim_render-ego3_median: 0.005686968564987183
sim_render-ego3_min: 0.005686968564987183
simulation-passed: 1
step_physics_max: 0.3772987432100556
step_physics_mean: 0.3011792649088496
step_physics_median: 0.3772987432100556
step_physics_min: 0.14894030830643754
survival_time_max: 59.99999999999873
survival_time_mean: 31.69999999999966
survival_time_min: 17.550000000000114
No reset possible
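Each row in the "other stats" block appears to be a reduction of one metric across the per-episode values in the details JSON: the `_max`, `_mean`, `_median`, and `_min` suffixes name the aggregate applied. The reduction can be sketched as follows; `aggregate` is a hypothetical helper, not the actual scoring code:

```python
from statistics import mean, median

def aggregate(per_episode, key):
    """Collapse one metric across episodes into the four suffixed rows."""
    values = [ep[key] for ep in per_episode.values() if key in ep]
    return {
        f"{key}_max": max(values),
        f"{key}_mean": mean(values),
        f"{key}_median": median(values),
        f"{key}_min": min(values),
    }

# Toy per-episode dict shaped like the details JSON above.
episodes = {
    "ep-ego0": {"survival_time": 17.55},
    "ep-ego1": {"survival_time": 60.0},
}
stats = aggregate(episodes, "survival_time")
```

Spot-checking job 74385 supports this reading: its six per-episode survival_time values (four at ~17.55 s, two at ~60 s) give mean 31.7 and median 17.55, matching the survival_time_mean and survival_time_median rows reported for that job.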
Job 74361 | submission 13571 | Márton Tim 🇭🇺 | "3626" | aido-LFP-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:02:34
survival_time_median: 10.750000000000018
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 4.484765627042621
deviation-center-line_median: 0.5653323428345775


other stats
agent_compute-ego0_max: 0.04870630855913515
agent_compute-ego0_mean: 0.04870630855913515
agent_compute-ego0_median: 0.04870630855913515
agent_compute-ego0_min: 0.04870630855913515
complete-iteration_max: 0.2082195469626674
complete-iteration_mean: 0.2082195469626674
complete-iteration_median: 0.2082195469626674
complete-iteration_min: 0.2082195469626674
deviation-center-line_max: 0.5653323428345775
deviation-center-line_mean: 0.5653323428345775
deviation-center-line_min: 0.5653323428345775
deviation-heading_max: 1.9165080076619303
deviation-heading_mean: 1.9165080076619303
deviation-heading_median: 1.9165080076619303
deviation-heading_min: 1.9165080076619303
distance-from-start_max: 1.122254532859187
distance-from-start_mean: 1.122254532859187
distance-from-start_median: 1.122254532859187
distance-from-start_min: 1.122254532859187
driven_any_max: 4.572299715588002
driven_any_mean: 4.572299715588002
driven_any_median: 4.572299715588002
driven_any_min: 4.572299715588002
driven_lanedir_consec_max: 4.484765627042621
driven_lanedir_consec_mean: 4.484765627042621
driven_lanedir_consec_min: 4.484765627042621
driven_lanedir_max: 4.484765627042621
driven_lanedir_mean: 4.484765627042621
driven_lanedir_median: 4.484765627042621
driven_lanedir_min: 4.484765627042621
get_duckie_state_max: 0.004490843525639287
get_duckie_state_mean: 0.004490843525639287
get_duckie_state_median: 0.004490843525639287
get_duckie_state_min: 0.004490843525639287
get_robot_state_max: 0.0040182725146964745
get_robot_state_mean: 0.0040182725146964745
get_robot_state_median: 0.0040182725146964745
get_robot_state_min: 0.0040182725146964745
get_state_dump_max: 0.005900305730325205
get_state_dump_mean: 0.005900305730325205
get_state_dump_median: 0.005900305730325205
get_state_dump_min: 0.005900305730325205
get_ui_image_max: 0.019336793157789443
get_ui_image_mean: 0.019336793157789443
get_ui_image_median: 0.019336793157789443
get_ui_image_min: 0.019336793157789443
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 4.572299715588002, "get_ui_image": 0.019336793157789443, "step_physics": 0.11118065317471822, "survival_time": 10.750000000000018, "driven_lanedir": 4.484765627042621, "get_state_dump": 0.005900305730325205, "get_robot_state": 0.0040182725146964745, "sim_render-ego0": 0.003991944922341241, "get_duckie_state": 0.004490843525639287, "in-drivable-lane": 0.0, "deviation-heading": 1.9165080076619303, "agent_compute-ego0": 0.04870630855913515, "complete-iteration": 0.2082195469626674, "set_robot_commands": 0.0024940636422899035, "distance-from-start": 1.122254532859187, "deviation-center-line": 0.5653323428345775, "driven_lanedir_consec": 4.484765627042621, "sim_compute_sim_state": 0.005916806282820525, "sim_compute_performance-ego0": 0.0020841669153284143}}
set_robot_commands_max: 0.0024940636422899035
set_robot_commands_mean: 0.0024940636422899035
set_robot_commands_median: 0.0024940636422899035
set_robot_commands_min: 0.0024940636422899035
sim_compute_performance-ego0_max: 0.0020841669153284143
sim_compute_performance-ego0_mean: 0.0020841669153284143
sim_compute_performance-ego0_median: 0.0020841669153284143
sim_compute_performance-ego0_min: 0.0020841669153284143
sim_compute_sim_state_max: 0.005916806282820525
sim_compute_sim_state_mean: 0.005916806282820525
sim_compute_sim_state_median: 0.005916806282820525
sim_compute_sim_state_min: 0.005916806282820525
sim_render-ego0_max: 0.003991944922341241
sim_render-ego0_mean: 0.003991944922341241
sim_render-ego0_median: 0.003991944922341241
sim_render-ego0_min: 0.003991944922341241
simulation-passed: 1
step_physics_max: 0.11118065317471822
step_physics_mean: 0.11118065317471822
step_physics_median: 0.11118065317471822
step_physics_min: 0.11118065317471822
survival_time_max: 10.750000000000018
survival_time_mean: 10.750000000000018
survival_time_min: 10.750000000000018
No reset possible
Job 74343 | submission 13572 | Márton Tim 🇭🇺 | "3626" | aido-LFV-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:03:02
survival_time_median: 7.399999999999982
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 2.982710506007079
deviation-center-line_median: 0.4404650895905106


other stats
agent_compute-ego0_max: 0.05266284302577076
agent_compute-ego0_mean: 0.05266284302577076
agent_compute-ego0_median: 0.05266284302577076
agent_compute-ego0_min: 0.05266284302577076
agent_compute-npc0_max: 0.028504917285586365
agent_compute-npc0_mean: 0.028504917285586365
agent_compute-npc0_median: 0.028504917285586365
agent_compute-npc0_min: 0.028504917285586365
complete-iteration_max: 0.2915168528588826
complete-iteration_mean: 0.2915168528588826
complete-iteration_median: 0.2915168528588826
complete-iteration_min: 0.2915168528588826
deviation-center-line_max: 0.4404650895905106
deviation-center-line_mean: 0.4404650895905106
deviation-center-line_min: 0.4404650895905106
deviation-heading_max: 1.1465388696372765
deviation-heading_mean: 1.1465388696372765
deviation-heading_median: 1.1465388696372765
deviation-heading_min: 1.1465388696372765
distance-from-start_max: 1.0944378887819075
distance-from-start_mean: 1.0944378887819075
distance-from-start_median: 1.0944378887819075
distance-from-start_min: 1.0944378887819075
driven_any_max: 3.039278662284294
driven_any_mean: 3.039278662284294
driven_any_median: 3.039278662284294
driven_any_min: 3.039278662284294
driven_lanedir_consec_max: 2.982710506007079
driven_lanedir_consec_mean: 2.982710506007079
driven_lanedir_consec_min: 2.982710506007079
driven_lanedir_max: 2.982710506007079
driven_lanedir_mean: 2.982710506007079
driven_lanedir_median: 2.982710506007079
driven_lanedir_min: 2.982710506007079
get_duckie_state_max: 1.8561446426698824e-06
get_duckie_state_mean: 1.8561446426698824e-06
get_duckie_state_median: 1.8561446426698824e-06
get_duckie_state_min: 1.8561446426698824e-06
get_robot_state_max: 0.008552975302574619
get_robot_state_mean: 0.008552975302574619
get_robot_state_median: 0.008552975302574619
get_robot_state_min: 0.008552975302574619
get_state_dump_max: 0.00755919226063978
get_state_dump_mean: 0.00755919226063978
get_state_dump_median: 0.00755919226063978
get_state_dump_min: 0.00755919226063978
get_ui_image_max: 0.021168620794411473
get_ui_image_mean: 0.021168620794411473
get_ui_image_median: 0.021168620794411473
get_ui_image_min: 0.021168620794411473
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LFV-norm-small_loop-000-ego0": {"driven_any": 3.039278662284294, "get_ui_image": 0.021168620794411473, "step_physics": 0.14371580405523313, "survival_time": 7.399999999999982, "driven_lanedir": 2.982710506007079, "get_state_dump": 0.00755919226063978, "get_robot_state": 0.008552975302574619, "sim_render-ego0": 0.004459309097904487, "sim_render-npc0": 0.004618875132311111, "get_duckie_state": 1.8561446426698824e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.1465388696372765, "agent_compute-ego0": 0.05266284302577076, "agent_compute-npc0": 0.028504917285586365, "complete-iteration": 0.2915168528588826, "set_robot_commands": 0.002718659855375354, "distance-from-start": 1.0944378887819075, "deviation-center-line": 0.4404650895905106, "driven_lanedir_consec": 2.982710506007079, "sim_compute_sim_state": 0.010005366882221811, "sim_compute_performance-ego0": 0.0022914201621241216, "sim_compute_performance-npc0": 0.002332440958727126}}
set_robot_commands (max=mean=median=min) 0.002718659855375354
sim_compute_performance-ego0 (max=mean=median=min) 0.0022914201621241216
sim_compute_performance-npc0 (max=mean=median=min) 0.002332440958727126
sim_compute_sim_state (max=mean=median=min) 0.010005366882221811
sim_render-ego0 (max=mean=median=min) 0.004459309097904487
sim_render-npc0 (max=mean=median=min) 0.004618875132311111
simulation-passed 1
step_physics (max=mean=median=min) 0.14371580405523313
survival_time (max=mean=min) 7.399999999999982
No reset possible
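Each job's `details` field above is plain JSON mapping episode names to raw metrics, so the max/mean/median/min rows in these tables can be recomputed directly from it. A minimal sketch (the excerpt and the `aggregate` helper are illustrative only, not part of the challenge tooling; with a single episode all four aggregates coincide, which is why the quadruples above repeat one value):

```python
import json
import statistics

# Single-episode excerpt from the "details" field of job 74343 above.
details = json.loads(
    '{"LFV-norm-small_loop-000-ego0":'
    ' {"survival_time": 7.399999999999982,'
    ' "driven_any": 3.039278662284294,'
    ' "deviation-center-line": 0.4404650895905106}}'
)

def aggregate(metric: str) -> dict:
    """Recompute the <metric>_max/_mean/_median/_min rows from raw episodes."""
    values = [episode[metric] for episode in details.values()]
    return {
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "min": min(values),
    }

print(aggregate("survival_time"))
```

For multi-episode steps the same helper would yield genuinely different max/mean/median/min values per metric.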
Job 74325 | submission 13572 | Márton Tim 🇭🇺 | label 3626 | aido-LFV-sim-validation | step sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:02:46
survival_time_median 7.199999999999982
in-drivable-lane_median 0.0
driven_lanedir_consec_median 2.9223570732086666
deviation-center-line_median 0.29881493720980257


other stats
agent_compute-ego0 (max=mean=median=min) 0.050829790378439016
agent_compute-npc0 (max=mean=median=min) 0.02741106460834372
complete-iteration (max=mean=median=min) 0.28790598244502624
deviation-center-line (max=mean=min) 0.29881493720980257
deviation-heading (max=mean=median=min) 1.0799879367588607
distance-from-start (max=mean=median=min) 1.0614228082203512
driven_any (max=mean=median=min) 2.9619831918359942
driven_lanedir_consec (max=mean=min) 2.9223570732086666
driven_lanedir (max=mean=median=min) 2.9223570732086666
get_duckie_state (max=mean=median=min) 1.6804399161503234e-06
get_robot_state (max=mean=median=min) 0.008286777035943393
get_state_dump (max=mean=median=min) 0.007173850618559739
get_ui_image (max=mean=median=min) 0.020688693276767072
in-drivable-lane (max=mean=min) 0.0
per-episodes
details{"LFV-norm-small_loop-000-ego0": {"driven_any": 2.9619831918359942, "get_ui_image": 0.020688693276767072, "step_physics": 0.1449112053575187, "survival_time": 7.199999999999982, "driven_lanedir": 2.9223570732086666, "get_state_dump": 0.007173850618559739, "get_robot_state": 0.008286777035943393, "sim_render-ego0": 0.0042918797197013065, "sim_render-npc0": 0.004408810056489089, "get_duckie_state": 1.6804399161503234e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.0799879367588607, "agent_compute-ego0": 0.050829790378439016, "agent_compute-npc0": 0.02741106460834372, "complete-iteration": 0.28790598244502624, "set_robot_commands": 0.0026960767548659752, "distance-from-start": 1.0614228082203512, "deviation-center-line": 0.29881493720980257, "driven_lanedir_consec": 2.9223570732086666, "sim_compute_sim_state": 0.00982524115463783, "sim_compute_performance-ego0": 0.0022885289685479525, "sim_compute_performance-npc0": 0.002280758167135304}}
set_robot_commands (max=mean=median=min) 0.0026960767548659752
sim_compute_performance-ego0 (max=mean=median=min) 0.0022885289685479525
sim_compute_performance-npc0 (max=mean=median=min) 0.002280758167135304
sim_compute_sim_state (max=mean=median=min) 0.00982524115463783
sim_render-ego0 (max=mean=median=min) 0.0042918797197013065
sim_render-npc0 (max=mean=median=min) 0.004408810056489089
simulation-passed 1
step_physics (max=mean=median=min) 0.1449112053575187
survival_time (max=mean=min) 7.199999999999982
No reset possible
Job 74297 | submission 13572 | Márton Tim 🇭🇺 | label 3626 | aido-LFV-sim-validation | step sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:03:49
survival_time_median 7.549999999999981
in-drivable-lane_median 0.0
driven_lanedir_consec_median 3.172669109889953
deviation-center-line_median 0.44850830948923226


other stats
agent_compute-ego0_max0.050069071744617666
agent_compute-ego0_mean0.050069071744617666
agent_compute-ego0_median0.050069071744617666
agent_compute-ego0_min0.050069071744617666
agent_compute-npc0_max0.03597457471646761
agent_compute-npc0_mean0.03597457471646761
agent_compute-npc0_median0.03597457471646761
agent_compute-npc0_min0.03597457471646761
agent_compute-npc1_max0.03624804866941352
agent_compute-npc1_mean0.03624804866941352
agent_compute-npc1_median0.03624804866941352
agent_compute-npc1_min0.03624804866941352
agent_compute-npc2_max0.03722949561319853
agent_compute-npc2_mean0.03722949561319853
agent_compute-npc2_median0.03722949561319853
agent_compute-npc2_min0.03722949561319853
complete-iteration_max0.5099264195090846
complete-iteration_mean0.5099264195090846
complete-iteration_median0.5099264195090846
complete-iteration_min0.5099264195090846
deviation-center-line_max0.44850830948923226
deviation-center-line_mean0.44850830948923226
deviation-center-line_min0.44850830948923226
deviation-heading_max1.1111482871678016
deviation-heading_mean1.1111482871678016
deviation-heading_median1.1111482871678016
deviation-heading_min1.1111482871678016
distance-from-start_max2.4224856560163515
distance-from-start_mean2.4224856560163515
distance-from-start_median2.4224856560163515
distance-from-start_min2.4224856560163515
driven_any_max3.225448411495144
driven_any_mean3.225448411495144
driven_any_median3.225448411495144
driven_any_min3.225448411495144
driven_lanedir_consec_max3.172669109889953
driven_lanedir_consec_mean3.172669109889953
driven_lanedir_consec_min3.172669109889953
driven_lanedir_max3.172669109889953
driven_lanedir_mean3.172669109889953
driven_lanedir_median3.172669109889953
driven_lanedir_min3.172669109889953
get_duckie_state_max3.050816686529862e-06
get_duckie_state_mean3.050816686529862e-06
get_duckie_state_median3.050816686529862e-06
get_duckie_state_min3.050816686529862e-06
get_robot_state_max0.016065793602090133
get_robot_state_mean0.016065793602090133
get_robot_state_median0.016065793602090133
get_robot_state_min0.016065793602090133
get_state_dump_max0.010454471174039338
get_state_dump_mean0.010454471174039338
get_state_dump_median0.010454471174039338
get_state_dump_min0.010454471174039338
get_ui_image_max0.023238015802283036
get_ui_image_mean0.023238015802283036
get_ui_image_median0.023238015802283036
get_ui_image_min0.023238015802283036
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFV-norm-loop-000-ego0": {"driven_any": 3.225448411495144, "get_ui_image": 0.023238015802283036, "step_physics": 0.23828736261317604, "survival_time": 7.549999999999981, "driven_lanedir": 3.172669109889953, "get_state_dump": 0.010454471174039338, "get_robot_state": 0.016065793602090133, "sim_render-ego0": 0.00435948371887207, "sim_render-npc0": 0.004364059159630223, "sim_render-npc1": 0.0044350592713606985, "sim_render-npc2": 0.004615733498021176, "get_duckie_state": 3.050816686529862e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.1111482871678016, "agent_compute-ego0": 0.050069071744617666, "agent_compute-npc0": 0.03597457471646761, "agent_compute-npc1": 0.03624804866941352, "agent_compute-npc2": 0.03722949561319853, "complete-iteration": 0.5099264195090846, "set_robot_commands": 0.0028833966506154915, "distance-from-start": 2.4224856560163515, "deviation-center-line": 0.44850830948923226, "driven_lanedir_consec": 3.172669109889953, "sim_compute_sim_state": 0.024460516477886, "sim_compute_performance-ego0": 0.0023309619803177683, "sim_compute_performance-npc0": 0.0022494102779187656, "sim_compute_performance-npc1": 0.002250251017118755, "sim_compute_performance-npc2": 0.002314660109971699}}
set_robot_commands (max=mean=median=min) 0.0028833966506154915
sim_compute_performance-ego0 (max=mean=median=min) 0.0023309619803177683
sim_compute_performance-npc0 (max=mean=median=min) 0.0022494102779187656
sim_compute_performance-npc1 (max=mean=median=min) 0.002250251017118755
sim_compute_performance-npc2 (max=mean=median=min) 0.002314660109971699
sim_compute_sim_state (max=mean=median=min) 0.024460516477886
sim_render-ego0 (max=mean=median=min) 0.00435948371887207
sim_render-npc0 (max=mean=median=min) 0.004364059159630223
sim_render-npc1 (max=mean=median=min) 0.0044350592713606985
sim_render-npc2 (max=mean=median=min) 0.004615733498021176
simulation-passed 1
step_physics (max=mean=median=min) 0.23828736261317604
survival_time (max=mean=min) 7.549999999999981
No reset possible
Job 74221 | submission 13579 | Andras Beres | label 202-1 | aido-LF-sim-testing | step sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:08:35
driven_lanedir_consec_median 26.790253709557703
survival_time_median 59.99999999999873
deviation-center-line_median 3.6306465708304176
in-drivable-lane_median 0.0


other stats
agent_compute-ego0_max0.022028095418467907
agent_compute-ego0_mean0.022028095418467907
agent_compute-ego0_median0.022028095418467907
agent_compute-ego0_min0.022028095418467907
complete-iteration_max0.1759146242514935
complete-iteration_mean0.1759146242514935
complete-iteration_median0.1759146242514935
complete-iteration_min0.1759146242514935
deviation-center-line_max3.6306465708304176
deviation-center-line_mean3.6306465708304176
deviation-center-line_min3.6306465708304176
deviation-heading_max10.160701941690624
deviation-heading_mean10.160701941690624
deviation-heading_median10.160701941690624
deviation-heading_min10.160701941690624
distance-from-start_max1.1113850264628602
distance-from-start_mean1.1113850264628602
distance-from-start_median1.1113850264628602
distance-from-start_min1.1113850264628602
driven_any_max27.38678599649427
driven_any_mean27.38678599649427
driven_any_median27.38678599649427
driven_any_min27.38678599649427
driven_lanedir_consec_max26.790253709557703
driven_lanedir_consec_mean26.790253709557703
driven_lanedir_consec_min26.790253709557703
driven_lanedir_max26.790253709557703
driven_lanedir_mean26.790253709557703
driven_lanedir_median26.790253709557703
driven_lanedir_min26.790253709557703
get_duckie_state_max1.398748799624987e-06
get_duckie_state_mean1.398748799624987e-06
get_duckie_state_median1.398748799624987e-06
get_duckie_state_min1.398748799624987e-06
get_robot_state_max0.003847615903462101
get_robot_state_mean0.003847615903462101
get_robot_state_median0.003847615903462101
get_robot_state_min0.003847615903462101
get_state_dump_max0.004724233176289351
get_state_dump_mean0.004724233176289351
get_state_dump_median0.004724233176289351
get_state_dump_min0.004724233176289351
get_ui_image_max0.01869003480916019
get_ui_image_mean0.01869003480916019
get_ui_image_median0.01869003480916019
get_ui_image_min0.01869003480916019
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 27.38678599649427, "get_ui_image": 0.01869003480916019, "step_physics": 0.11257660240058996, "survival_time": 59.99999999999873, "driven_lanedir": 26.790253709557703, "get_state_dump": 0.004724233176289351, "get_robot_state": 0.003847615903462101, "sim_render-ego0": 0.003927393420153514, "get_duckie_state": 1.398748799624987e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.160701941690624, "agent_compute-ego0": 0.022028095418467907, "complete-iteration": 0.1759146242514935, "set_robot_commands": 0.0024487138489303144, "distance-from-start": 1.1113850264628602, "deviation-center-line": 3.6306465708304176, "driven_lanedir_consec": 26.790253709557703, "sim_compute_sim_state": 0.005579216295634579, "sim_compute_performance-ego0": 0.002007264082477452}}
set_robot_commands (max=mean=median=min) 0.0024487138489303144
sim_compute_performance-ego0 (max=mean=median=min) 0.002007264082477452
sim_compute_sim_state (max=mean=median=min) 0.005579216295634579
sim_render-ego0 (max=mean=median=min) 0.003927393420153514
simulation-passed 1
step_physics (max=mean=median=min) 0.11257660240058996
survival_time (max=mean=min) 59.99999999999873
No reset possible
Job 74200 | submission 13586 | Andras Beres | label 202-1 | aido-LFP-sim-validation | step sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:01:42
survival_time_median 5.999999999999987
in-drivable-lane_median 0.0
driven_lanedir_consec_median 2.574327152397826
deviation-center-line_median 0.24496256329516816


other stats
agent_compute-ego0_max0.017382684817984086
agent_compute-ego0_mean0.017382684817984086
agent_compute-ego0_median0.017382684817984086
agent_compute-ego0_min0.017382684817984086
complete-iteration_max0.1936381690758319
complete-iteration_mean0.1936381690758319
complete-iteration_median0.1936381690758319
complete-iteration_min0.1936381690758319
deviation-center-line_max0.24496256329516816
deviation-center-line_mean0.24496256329516816
deviation-center-line_min0.24496256329516816
deviation-heading_max0.7792304379731055
deviation-heading_mean0.7792304379731055
deviation-heading_median0.7792304379731055
deviation-heading_min0.7792304379731055
distance-from-start_max2.0114752172098185
distance-from-start_mean2.0114752172098185
distance-from-start_median2.0114752172098185
distance-from-start_min2.0114752172098185
driven_any_max2.6042882173950277
driven_any_mean2.6042882173950277
driven_any_median2.6042882173950277
driven_any_min2.6042882173950277
driven_lanedir_consec_max2.574327152397826
driven_lanedir_consec_mean2.574327152397826
driven_lanedir_consec_min2.574327152397826
driven_lanedir_max2.574327152397826
driven_lanedir_mean2.574327152397826
driven_lanedir_median2.574327152397826
driven_lanedir_min2.574327152397826
get_duckie_state_max0.02520132261859484
get_duckie_state_mean0.02520132261859484
get_duckie_state_median0.02520132261859484
get_duckie_state_min0.02520132261859484
get_robot_state_max0.0039762662461966525
get_robot_state_mean0.0039762662461966525
get_robot_state_median0.0039762662461966525
get_robot_state_min0.0039762662461966525
get_state_dump_max0.009066455620379488
get_state_dump_mean0.009066455620379488
get_state_dump_median0.009066455620379488
get_state_dump_min0.009066455620379488
get_ui_image_max0.01973977561824578
get_ui_image_mean0.01973977561824578
get_ui_image_median0.01973977561824578
get_ui_image_min0.01973977561824578
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.6042882173950277, "get_ui_image": 0.01973977561824578, "step_physics": 0.10075832989590228, "survival_time": 5.999999999999987, "driven_lanedir": 2.574327152397826, "get_state_dump": 0.009066455620379488, "get_robot_state": 0.0039762662461966525, "sim_render-ego0": 0.0040212426303832, "get_duckie_state": 0.02520132261859484, "in-drivable-lane": 0.0, "deviation-heading": 0.7792304379731055, "agent_compute-ego0": 0.017382684817984086, "complete-iteration": 0.1936381690758319, "set_robot_commands": 0.0023722490988487056, "distance-from-start": 2.0114752172098185, "deviation-center-line": 0.24496256329516816, "driven_lanedir_consec": 2.574327152397826, "sim_compute_sim_state": 0.008961074608416596, "sim_compute_performance-ego0": 0.0020485121356554267}}
set_robot_commands (max=mean=median=min) 0.0023722490988487056
sim_compute_performance-ego0 (max=mean=median=min) 0.0020485121356554267
sim_compute_sim_state (max=mean=median=min) 0.008961074608416596
sim_render-ego0 (max=mean=median=min) 0.0040212426303832
simulation-passed 1
step_physics (max=mean=median=min) 0.10075832989590228
survival_time (max=mean=min) 5.999999999999987
No reset possible
Job 74178 | submission 13621 | Raphael Jean | label mobile-segmentation-pedestrian | aido-LFVI-sim-validation | step sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:02:33
survival_time_median 3.4499999999999957
in-drivable-lane_median 0.04999999999999982
driven_lanedir_consec_median 0.7140072410798259
deviation-center-line_median 0.22823992531772797


other stats
agent_compute-ego0_max0.017525666100638253
agent_compute-ego0_mean0.017525666100638253
agent_compute-ego0_median0.017525666100638253
agent_compute-ego0_min0.017525666100638253
agent_compute-npc0_max0.04634628977094378
agent_compute-npc0_mean0.04634628977094378
agent_compute-npc0_median0.04634628977094378
agent_compute-npc0_min0.04634628977094378
agent_compute-npc1_max0.04573499815804618
agent_compute-npc1_mean0.04573499815804618
agent_compute-npc1_median0.04573499815804618
agent_compute-npc1_min0.04573499815804618
agent_compute-npc2_max0.04484520299094064
agent_compute-npc2_mean0.04484520299094064
agent_compute-npc2_median0.04484520299094064
agent_compute-npc2_min0.04484520299094064
complete-iteration_max0.5352699756622314
complete-iteration_mean0.5352699756622314
complete-iteration_median0.5352699756622314
complete-iteration_min0.5352699756622314
deviation-center-line_max0.22823992531772797
deviation-center-line_mean0.22823992531772797
deviation-center-line_min0.22823992531772797
deviation-heading_max2.5145993098346184
deviation-heading_mean2.5145993098346184
deviation-heading_median2.5145993098346184
deviation-heading_min2.5145993098346184
distance-from-start_max0.9900047161300316
distance-from-start_mean0.9900047161300316
distance-from-start_median0.9900047161300316
distance-from-start_min0.9900047161300316
driven_any_max1.0325357269505728
driven_any_mean1.0325357269505728
driven_any_median1.0325357269505728
driven_any_min1.0325357269505728
driven_lanedir_consec_max0.7140072410798259
driven_lanedir_consec_mean0.7140072410798259
driven_lanedir_consec_min0.7140072410798259
driven_lanedir_max0.7431033905592002
driven_lanedir_mean0.7431033905592002
driven_lanedir_median0.7431033905592002
driven_lanedir_min0.7431033905592002
get_duckie_state_max3.1539372035435266e-06
get_duckie_state_mean3.1539372035435266e-06
get_duckie_state_median3.1539372035435266e-06
get_duckie_state_min3.1539372035435266e-06
get_robot_state_max0.01450815200805664
get_robot_state_mean0.01450815200805664
get_robot_state_median0.01450815200805664
get_robot_state_min0.01450815200805664
get_state_dump_max0.009813189506530762
get_state_dump_mean0.009813189506530762
get_state_dump_median0.009813189506530762
get_state_dump_min0.009813189506530762
get_ui_image_max0.02642662525177002
get_ui_image_mean0.02642662525177002
get_ui_image_median0.02642662525177002
get_ui_image_min0.02642662525177002
in-drivable-lane_max0.04999999999999982
in-drivable-lane_mean0.04999999999999982
in-drivable-lane_min0.04999999999999982
per-episodes
details{"LFVI-norm-4way-000-ego0": {"driven_any": 1.0325357269505728, "get_ui_image": 0.02642662525177002, "step_physics": 0.25908216067722867, "survival_time": 3.4499999999999957, "driven_lanedir": 0.7431033905592002, "get_state_dump": 0.009813189506530762, "get_robot_state": 0.01450815200805664, "sim_render-ego0": 0.003751546995980399, "sim_render-npc0": 0.004445317813328334, "sim_render-npc1": 0.004084757396153041, "sim_render-npc2": 0.003966729981558664, "get_duckie_state": 3.1539372035435266e-06, "in-drivable-lane": 0.04999999999999982, "deviation-heading": 2.5145993098346184, "agent_compute-ego0": 0.017525666100638253, "agent_compute-npc0": 0.04634628977094378, "agent_compute-npc1": 0.04573499815804618, "agent_compute-npc2": 0.04484520299094064, "complete-iteration": 0.5352699756622314, "set_robot_commands": 0.002550523621695382, "distance-from-start": 0.9900047161300316, "deviation-center-line": 0.22823992531772797, "driven_lanedir_consec": 0.7140072410798259, "sim_compute_sim_state": 0.03616250242505755, "sim_compute_performance-ego0": 0.002063465118408203, "sim_compute_performance-npc0": 0.002053802353995187, "sim_compute_performance-npc1": 0.002223161288670131, "sim_compute_performance-npc2": 0.0021382672446114675}}
set_robot_commands (max=mean=median=min) 0.002550523621695382
sim_compute_performance-ego0 (max=mean=median=min) 0.002063465118408203
sim_compute_performance-npc0 (max=mean=median=min) 0.002053802353995187
sim_compute_performance-npc1 (max=mean=median=min) 0.002223161288670131
sim_compute_performance-npc2 (max=mean=median=min) 0.0021382672446114675
sim_compute_sim_state (max=mean=median=min) 0.03616250242505755
sim_render-ego0 (max=mean=median=min) 0.003751546995980399
sim_render-npc0 (max=mean=median=min) 0.004445317813328334
sim_render-npc1 (max=mean=median=min) 0.004084757396153041
sim_render-npc2 (max=mean=median=min) 0.003966729981558664
simulation-passed 1
step_physics (max=mean=median=min) 0.25908216067722867
survival_time (max=mean=min) 3.4499999999999957
No reset possible
Job 74147 | submission 13587 | Andras Beres | label 202-1 | aido-LFV-sim-validation | step sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:02:35
survival_time_median 7.649999999999981
in-drivable-lane_median 0.0
driven_lanedir_consec_median 2.990405683527092
deviation-center-line_median 0.4011805990054704


other stats
agent_compute-ego0_max0.019167088843011236
agent_compute-ego0_mean0.019167088843011236
agent_compute-ego0_median0.019167088843011236
agent_compute-ego0_min0.019167088843011236
agent_compute-npc0_max0.026909823541517382
agent_compute-npc0_mean0.026909823541517382
agent_compute-npc0_median0.026909823541517382
agent_compute-npc0_min0.026909823541517382
complete-iteration_max0.2573875250754418
complete-iteration_mean0.2573875250754418
complete-iteration_median0.2573875250754418
complete-iteration_min0.2573875250754418
deviation-center-line_max0.4011805990054704
deviation-center-line_mean0.4011805990054704
deviation-center-line_min0.4011805990054704
deviation-heading_max1.5424671396363463
deviation-heading_mean1.5424671396363463
deviation-heading_median1.5424671396363463
deviation-heading_min1.5424671396363463
distance-from-start_max1.0799849996669275
distance-from-start_mean1.0799849996669275
distance-from-start_median1.0799849996669275
distance-from-start_min1.0799849996669275
driven_any_max3.0684433128697473
driven_any_mean3.0684433128697473
driven_any_median3.0684433128697473
driven_any_min3.0684433128697473
driven_lanedir_consec_max2.990405683527092
driven_lanedir_consec_mean2.990405683527092
driven_lanedir_consec_min2.990405683527092
driven_lanedir_max2.990405683527092
driven_lanedir_mean2.990405683527092
driven_lanedir_median2.990405683527092
driven_lanedir_min2.990405683527092
get_duckie_state_max1.6193885307807429e-06
get_duckie_state_mean1.6193885307807429e-06
get_duckie_state_median1.6193885307807429e-06
get_duckie_state_min1.6193885307807429e-06
get_robot_state_max0.008191721779959542
get_robot_state_mean0.008191721779959542
get_robot_state_median0.008191721779959542
get_robot_state_min0.008191721779959542
get_state_dump_max0.00709382744578572
get_state_dump_mean0.00709382744578572
get_state_dump_median0.00709382744578572
get_state_dump_min0.00709382744578572
get_ui_image_max0.020291204576368457
get_ui_image_mean0.020291204576368457
get_ui_image_median0.020291204576368457
get_ui_image_min0.020291204576368457
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFV-norm-small_loop-000-ego0": {"driven_any": 3.0684433128697473, "get_ui_image": 0.020291204576368457, "step_physics": 0.14739243086282308, "survival_time": 7.649999999999981, "driven_lanedir": 2.990405683527092, "get_state_dump": 0.00709382744578572, "get_robot_state": 0.008191721779959542, "sim_render-ego0": 0.00427320096399877, "sim_render-npc0": 0.004319195623521681, "get_duckie_state": 1.6193885307807429e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.5424671396363463, "agent_compute-ego0": 0.019167088843011236, "agent_compute-npc0": 0.026909823541517382, "complete-iteration": 0.2573875250754418, "set_robot_commands": 0.002769236440782423, "distance-from-start": 1.0799849996669275, "deviation-center-line": 0.4011805990054704, "driven_lanedir_consec": 2.990405683527092, "sim_compute_sim_state": 0.009838150693224622, "sim_compute_performance-ego0": 0.002190137838388418, "sim_compute_performance-npc0": 0.0022199587388472123}}
set_robot_commands_max0.002769236440782423
set_robot_commands_mean0.002769236440782423
set_robot_commands_median0.002769236440782423
set_robot_commands_min0.002769236440782423
sim_compute_performance-ego0_max0.002190137838388418
sim_compute_performance-ego0_mean0.002190137838388418
sim_compute_performance-ego0_median0.002190137838388418
sim_compute_performance-ego0_min0.002190137838388418
sim_compute_performance-npc0_max0.0022199587388472123
sim_compute_performance-npc0_mean0.0022199587388472123
sim_compute_performance-npc0_median0.0022199587388472123
sim_compute_performance-npc0_min0.0022199587388472123
sim_compute_sim_state_max0.009838150693224622
sim_compute_sim_state_mean0.009838150693224622
sim_compute_sim_state_median0.009838150693224622
sim_compute_sim_state_min0.009838150693224622
sim_render-ego0_max0.00427320096399877
sim_render-ego0_mean0.00427320096399877
sim_render-ego0_median0.00427320096399877
sim_render-ego0_min0.00427320096399877
sim_render-npc0_max0.004319195623521681
sim_render-npc0_mean0.004319195623521681
sim_render-npc0_median0.004319195623521681
sim_render-npc0_min0.004319195623521681
simulation-passed1
step_physics_max0.14739243086282308
step_physics_mean0.14739243086282308
step_physics_median0.14739243086282308
step_physics_min0.14739243086282308
survival_time_max7.649999999999981
survival_time_mean7.649999999999981
survival_time_min7.649999999999981
No reset possible
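The aggregate rows above (`*_min`, `*_mean`, `*_median`, `*_max`) are per-metric summaries taken across the episodes listed in the job's `details` JSON. As a minimal sketch (the `details` snippet below is a hypothetical, trimmed-down version of the real one), this is how such aggregates could be reproduced; note that for a single-episode job like this one, all four aggregates collapse to the episode's value, which is exactly what the table shows:

```python
import json
from statistics import median

# Trimmed, hypothetical version of a job's "details" field:
# episode name -> per-episode metrics.
details = json.loads("""{
  "LFV-norm-small_loop-000-ego0": {"survival_time": 7.65,
                                   "driven_lanedir_consec": 2.99}
}""")

def aggregate(metric):
    # Collect the metric across all episodes and summarize it.
    values = [ep[metric] for ep in details.values()]
    return {
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
        "median": median(values),
    }

print(aggregate("survival_time"))
```

With one episode, min, max, mean, and median are all identical, matching the repeated values in the single-episode jobs on this page.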
Job 74120 | submission 13587 | Andras Beres | 202-1 | aido-LFV-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:05:47
survival_time_median11.350000000000026
in-drivable-lane_median0.4499999999999984
driven_lanedir_consec_median4.611184263865431
deviation-center-line_median0.6415325786028636


other stats
agent_compute-ego0_max0.02121181027931079
agent_compute-ego0_mean0.02121181027931079
agent_compute-ego0_median0.02121181027931079
agent_compute-ego0_min0.02121181027931079
agent_compute-npc0_max0.059916552744413674
agent_compute-npc0_mean0.059916552744413674
agent_compute-npc0_median0.059916552744413674
agent_compute-npc0_min0.059916552744413674
agent_compute-npc1_max0.05877996745862459
agent_compute-npc1_mean0.05877996745862459
agent_compute-npc1_median0.05877996745862459
agent_compute-npc1_min0.05877996745862459
agent_compute-npc2_max0.052070721199637966
agent_compute-npc2_mean0.052070721199637966
agent_compute-npc2_median0.052070721199637966
agent_compute-npc2_min0.052070721199637966
agent_compute-npc3_max0.06000899536567822
agent_compute-npc3_mean0.06000899536567822
agent_compute-npc3_median0.06000899536567822
agent_compute-npc3_min0.06000899536567822
complete-iteration_max0.7648166899095502
complete-iteration_mean0.7648166899095502
complete-iteration_median0.7648166899095502
complete-iteration_min0.7648166899095502
deviation-center-line_max0.6415325786028636
deviation-center-line_mean0.6415325786028636
deviation-center-line_min0.6415325786028636
deviation-heading_max1.997710377930574
deviation-heading_mean1.997710377930574
deviation-heading_median1.997710377930574
deviation-heading_min1.997710377930574
distance-from-start_max2.582879970462437
distance-from-start_mean2.582879970462437
distance-from-start_median2.582879970462437
distance-from-start_min2.582879970462437
driven_any_max4.885535643936469
driven_any_mean4.885535643936469
driven_any_median4.885535643936469
driven_any_min4.885535643936469
driven_lanedir_consec_max4.611184263865431
driven_lanedir_consec_mean4.611184263865431
driven_lanedir_consec_min4.611184263865431
driven_lanedir_max4.611184263865431
driven_lanedir_mean4.611184263865431
driven_lanedir_median4.611184263865431
driven_lanedir_min4.611184263865431
get_duckie_state_max3.101532919365063e-06
get_duckie_state_mean3.101532919365063e-06
get_duckie_state_median3.101532919365063e-06
get_duckie_state_min3.101532919365063e-06
get_robot_state_max0.020202897096935072
get_robot_state_mean0.020202897096935072
get_robot_state_median0.020202897096935072
get_robot_state_min0.020202897096935072
get_state_dump_max0.012507849618008262
get_state_dump_mean0.012507849618008262
get_state_dump_median0.012507849618008262
get_state_dump_min0.012507849618008262
get_ui_image_max0.029041122971919544
get_ui_image_mean0.029041122971919544
get_ui_image_median0.029041122971919544
get_ui_image_min0.029041122971919544
in-drivable-lane_max0.4499999999999984
in-drivable-lane_mean0.4499999999999984
in-drivable-lane_min0.4499999999999984
per-episodes
details{"LFV-norm-zigzag-000-ego0": {"driven_any": 4.885535643936469, "get_ui_image": 0.029041122971919544, "step_physics": 0.3470724706064191, "survival_time": 11.350000000000026, "driven_lanedir": 4.611184263865431, "get_state_dump": 0.012507849618008262, "get_robot_state": 0.020202897096935072, "sim_render-ego0": 0.004415205696172882, "sim_render-npc0": 0.004503131958476284, "sim_render-npc1": 0.0046582671633937905, "sim_render-npc2": 0.004775419569852059, "sim_render-npc3": 0.00478142918201915, "get_duckie_state": 3.101532919365063e-06, "in-drivable-lane": 0.4499999999999984, "deviation-heading": 1.997710377930574, "agent_compute-ego0": 0.02121181027931079, "agent_compute-npc0": 0.059916552744413674, "agent_compute-npc1": 0.05877996745862459, "agent_compute-npc2": 0.052070721199637966, "agent_compute-npc3": 0.06000899536567822, "complete-iteration": 0.7648166899095502, "set_robot_commands": 0.0029364792924178275, "distance-from-start": 2.582879970462437, "deviation-center-line": 0.6415325786028636, "driven_lanedir_consec": 4.611184263865431, "sim_compute_sim_state": 0.05463409528397677, "sim_compute_performance-ego0": 0.002351065476735433, "sim_compute_performance-npc0": 0.0022662051937036346, "sim_compute_performance-npc1": 0.002327832213619299, "sim_compute_performance-npc2": 0.002404237002657171, "sim_compute_performance-npc3": 0.002363066924245734}}
set_robot_commands_max0.0029364792924178275
set_robot_commands_mean0.0029364792924178275
set_robot_commands_median0.0029364792924178275
set_robot_commands_min0.0029364792924178275
sim_compute_performance-ego0_max0.002351065476735433
sim_compute_performance-ego0_mean0.002351065476735433
sim_compute_performance-ego0_median0.002351065476735433
sim_compute_performance-ego0_min0.002351065476735433
sim_compute_performance-npc0_max0.0022662051937036346
sim_compute_performance-npc0_mean0.0022662051937036346
sim_compute_performance-npc0_median0.0022662051937036346
sim_compute_performance-npc0_min0.0022662051937036346
sim_compute_performance-npc1_max0.002327832213619299
sim_compute_performance-npc1_mean0.002327832213619299
sim_compute_performance-npc1_median0.002327832213619299
sim_compute_performance-npc1_min0.002327832213619299
sim_compute_performance-npc2_max0.002404237002657171
sim_compute_performance-npc2_mean0.002404237002657171
sim_compute_performance-npc2_median0.002404237002657171
sim_compute_performance-npc2_min0.002404237002657171
sim_compute_performance-npc3_max0.002363066924245734
sim_compute_performance-npc3_mean0.002363066924245734
sim_compute_performance-npc3_median0.002363066924245734
sim_compute_performance-npc3_min0.002363066924245734
sim_compute_sim_state_max0.05463409528397677
sim_compute_sim_state_mean0.05463409528397677
sim_compute_sim_state_median0.05463409528397677
sim_compute_sim_state_min0.05463409528397677
sim_render-ego0_max0.004415205696172882
sim_render-ego0_mean0.004415205696172882
sim_render-ego0_median0.004415205696172882
sim_render-ego0_min0.004415205696172882
sim_render-npc0_max0.004503131958476284
sim_render-npc0_mean0.004503131958476284
sim_render-npc0_median0.004503131958476284
sim_render-npc0_min0.004503131958476284
sim_render-npc1_max0.0046582671633937905
sim_render-npc1_mean0.0046582671633937905
sim_render-npc1_median0.0046582671633937905
sim_render-npc1_min0.0046582671633937905
sim_render-npc2_max0.004775419569852059
sim_render-npc2_mean0.004775419569852059
sim_render-npc2_median0.004775419569852059
sim_render-npc2_min0.004775419569852059
sim_render-npc3_max0.00478142918201915
sim_render-npc3_mean0.00478142918201915
sim_render-npc3_median0.00478142918201915
sim_render-npc3_min0.00478142918201915
simulation-passed1
step_physics_max0.3470724706064191
step_physics_mean0.3470724706064191
step_physics_median0.3470724706064191
step_physics_min0.3470724706064191
survival_time_max11.350000000000026
survival_time_mean11.350000000000026
survival_time_min11.350000000000026
No reset possible
Job 74069 | submission 13594 | Andras Beres | fsf+il | aido-LF-sim-testing | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:08:39
driven_lanedir_consec_median30.273709968604788
survival_time_median59.99999999999873
deviation-center-line_median3.744156617914448
in-drivable-lane_median0.3499999999999801


other stats
agent_compute-ego0_max0.020954879892557288
agent_compute-ego0_mean0.020954879892557288
agent_compute-ego0_median0.020954879892557288
agent_compute-ego0_min0.020954879892557288
complete-iteration_max0.1753421679424505
complete-iteration_mean0.1753421679424505
complete-iteration_median0.1753421679424505
complete-iteration_min0.1753421679424505
deviation-center-line_max3.744156617914448
deviation-center-line_mean3.744156617914448
deviation-center-line_min3.744156617914448
deviation-heading_max7.519526861282561
deviation-heading_mean7.519526861282561
deviation-heading_median7.519526861282561
deviation-heading_min7.519526861282561
distance-from-start_max3.2459800808726573
distance-from-start_mean3.2459800808726573
distance-from-start_median3.2459800808726573
distance-from-start_min3.2459800808726573
driven_any_max30.797711914071908
driven_any_mean30.797711914071908
driven_any_median30.797711914071908
driven_any_min30.797711914071908
driven_lanedir_consec_max30.273709968604788
driven_lanedir_consec_mean30.273709968604788
driven_lanedir_consec_min30.273709968604788
driven_lanedir_max30.273709968604788
driven_lanedir_mean30.273709968604788
driven_lanedir_median30.273709968604788
driven_lanedir_min30.273709968604788
get_duckie_state_max1.4583038152207146e-06
get_duckie_state_mean1.4583038152207146e-06
get_duckie_state_median1.4583038152207146e-06
get_duckie_state_min1.4583038152207146e-06
get_robot_state_max0.00402945026966257
get_robot_state_mean0.00402945026966257
get_robot_state_median0.00402945026966257
get_robot_state_min0.00402945026966257
get_state_dump_max0.005060659260872897
get_state_dump_mean0.005060659260872897
get_state_dump_median0.005060659260872897
get_state_dump_min0.005060659260872897
get_ui_image_max0.019643907642285095
get_ui_image_mean0.019643907642285095
get_ui_image_median0.019643907642285095
get_ui_image_min0.019643907642285095
in-drivable-lane_max0.3499999999999801
in-drivable-lane_mean0.3499999999999801
in-drivable-lane_min0.3499999999999801
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 30.797711914071908, "get_ui_image": 0.019643907642285095, "step_physics": 0.10855250632534616, "survival_time": 59.99999999999873, "driven_lanedir": 30.273709968604788, "get_state_dump": 0.005060659260872897, "get_robot_state": 0.00402945026966257, "sim_render-ego0": 0.003937440351284513, "get_duckie_state": 1.4583038152207146e-06, "in-drivable-lane": 0.3499999999999801, "deviation-heading": 7.519526861282561, "agent_compute-ego0": 0.020954879892557288, "complete-iteration": 0.1753421679424505, "set_robot_commands": 0.00255871672713687, "distance-from-start": 3.2459800808726573, "deviation-center-line": 3.744156617914448, "driven_lanedir_consec": 30.273709968604788, "sim_compute_sim_state": 0.008368251524201837, "sim_compute_performance-ego0": 0.0021476771412642175}}
set_robot_commands_max0.00255871672713687
set_robot_commands_mean0.00255871672713687
set_robot_commands_median0.00255871672713687
set_robot_commands_min0.00255871672713687
sim_compute_performance-ego0_max0.0021476771412642175
sim_compute_performance-ego0_mean0.0021476771412642175
sim_compute_performance-ego0_median0.0021476771412642175
sim_compute_performance-ego0_min0.0021476771412642175
sim_compute_sim_state_max0.008368251524201837
sim_compute_sim_state_mean0.008368251524201837
sim_compute_sim_state_median0.008368251524201837
sim_compute_sim_state_min0.008368251524201837
sim_render-ego0_max0.003937440351284513
sim_render-ego0_mean0.003937440351284513
sim_render-ego0_median0.003937440351284513
sim_render-ego0_min0.003937440351284513
simulation-passed1
step_physics_max0.10855250632534616
step_physics_mean0.10855250632534616
step_physics_median0.10855250632534616
step_physics_min0.10855250632534616
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
Job 73634 | submission 13624 | Raphael Jean | mobile-segmentation-pedestrian | aido-LFV_multi-sim-validation | 403 | success | up to date: yes | gpu-production-spot-0-01 | duration 0:41:46
survival_time_median59.99999999999873
in-drivable-lane_median0.9499999999999764
driven_lanedir_consec_median22.79735146834171
deviation-center-line_median3.0275938120192025


other stats
agent_compute-ego0_max0.018263043015326788
agent_compute-ego0_mean0.018112369090558553
agent_compute-ego0_median0.01803703212817444
agent_compute-ego0_min0.01803703212817444
agent_compute-ego1_max0.019620680392135888
agent_compute-ego1_mean0.01949756773451848
agent_compute-ego1_median0.01943601140570978
agent_compute-ego1_min0.01943601140570978
agent_compute-ego2_max0.020097842919240883
agent_compute-ego2_mean0.020097842919240883
agent_compute-ego2_median0.020097842919240883
agent_compute-ego2_min0.020097842919240883
agent_compute-ego3_max0.02092578409117128
agent_compute-ego3_mean0.02092578409117128
agent_compute-ego3_median0.02092578409117128
agent_compute-ego3_min0.02092578409117128
complete-iteration_max0.636746625122083
complete-iteration_mean0.5276104378230434
complete-iteration_median0.636746625122083
complete-iteration_min0.309338063224964
deviation-center-line_max5.486443088156714
deviation-center-line_mean3.3604190032072694
deviation-center-line_min2.4975506247949086
deviation-heading_max13.474203861675988
deviation-heading_mean10.88007398607474
deviation-heading_median10.808017098824154
deviation-heading_min9.261321104078878
distance-from-start_max4.28003007985178
distance-from-start_mean2.691413691129087
distance-from-start_median2.921765265296164
distance-from-start_min1.1421666520953615
driven_any_max26.99965145805751
driven_any_mean22.79007370580297
driven_any_median24.17716493253692
driven_any_min17.159666068213134
driven_lanedir_consec_max26.268143556315728
driven_lanedir_consec_mean21.80484538354101
driven_lanedir_consec_min16.1983124641552
driven_lanedir_max26.268143556315728
driven_lanedir_mean21.80484538354101
driven_lanedir_median22.79735146834171
driven_lanedir_min16.1983124641552
get_duckie_state_max1.8481906506540773e-06
get_duckie_state_mean1.802068599687164e-06
get_duckie_state_median1.8481906506540773e-06
get_duckie_state_min1.709824497753337e-06
get_robot_state_max0.015026195360957137
get_robot_state_mean0.012701030244702866
get_robot_state_median0.015026195360957137
get_robot_state_min0.008050700012194326
get_state_dump_max0.009904227784829371
get_state_dump_mean0.008880379727374437
get_state_dump_median0.009904227784829371
get_state_dump_min0.006832683612464568
get_ui_image_max0.02426106070201661
get_ui_image_mean0.022662341710497465
get_ui_image_median0.02426106070201661
get_ui_image_min0.01946490372745917
in-drivable-lane_max2.5499999999999936
in-drivable-lane_mean1.0499999999999912
in-drivable-lane_min0.0
per-episodes
details{"LFV_multi-norm-techtrack-000-ego0": {"driven_any": 22.004274181739767, "get_ui_image": 0.02426106070201661, "step_physics": 0.4385711426143344, "survival_time": 59.99999999999873, "driven_lanedir": 20.451550852416357, "get_state_dump": 0.009904227784829371, "get_robot_state": 0.015026195360957137, "sim_render-ego0": 0.0038549743226723903, "sim_render-ego1": 0.005031786393761933, "sim_render-ego2": 0.005011284579643103, "sim_render-ego3": 0.005214940697624721, "get_duckie_state": 1.8481906506540773e-06, "in-drivable-lane": 2.5499999999999936, "deviation-heading": 9.285306291871157, "agent_compute-ego0": 0.01803703212817444, "agent_compute-ego1": 0.01943601140570978, "agent_compute-ego2": 0.020097842919240883, "agent_compute-ego3": 0.02092578409117128, "complete-iteration": 0.636746625122083, "set_robot_commands": 0.002851159249019067, "distance-from-start": 4.28003007985178, "deviation-center-line": 3.4898232448538233, "driven_lanedir_consec": 20.451550852416357, "sim_compute_sim_state": 0.030804816531102723, "sim_compute_performance-ego0": 0.0021332356455324095, "sim_compute_performance-ego1": 0.002250510389659923, "sim_compute_performance-ego2": 0.0024508534621239505, "sim_compute_performance-ego3": 0.0025432381800668227}, "LFV_multi-norm-techtrack-000-ego1": {"driven_any": 17.159666068213134, "get_ui_image": 0.02426106070201661, "step_physics": 0.4385711426143344, "survival_time": 59.99999999999873, "driven_lanedir": 16.604714181615496, "get_state_dump": 0.009904227784829371, "get_robot_state": 0.015026195360957137, "sim_render-ego0": 0.0038549743226723903, "sim_render-ego1": 0.005031786393761933, "sim_render-ego2": 0.005011284579643103, "sim_render-ego3": 0.005214940697624721, "get_duckie_state": 1.8481906506540773e-06, "in-drivable-lane": 0.25000000000000355, "deviation-heading": 13.474203861675988, "agent_compute-ego0": 0.01803703212817444, "agent_compute-ego1": 0.01943601140570978, "agent_compute-ego2": 0.020097842919240883, "agent_compute-ego3": 
0.02092578409117128, "complete-iteration": 0.636746625122083, "set_robot_commands": 0.002851159249019067, "distance-from-start": 3.3028363034846735, "deviation-center-line": 2.918949062609556, "driven_lanedir_consec": 16.604714181615496, "sim_compute_sim_state": 0.030804816531102723, "sim_compute_performance-ego0": 0.0021332356455324095, "sim_compute_performance-ego1": 0.002250510389659923, "sim_compute_performance-ego2": 0.0024508534621239505, "sim_compute_performance-ego3": 0.0025432381800668227}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 26.350055683334073, "get_ui_image": 0.02426106070201661, "step_physics": 0.4385711426143344, "survival_time": 59.99999999999873, "driven_lanedir": 25.14315208426706, "get_state_dump": 0.009904227784829371, "get_robot_state": 0.015026195360957137, "sim_render-ego0": 0.0038549743226723903, "sim_render-ego1": 0.005031786393761933, "sim_render-ego2": 0.005011284579643103, "sim_render-ego3": 0.005214940697624721, "get_duckie_state": 1.8481906506540773e-06, "in-drivable-lane": 1.2999999999999543, "deviation-heading": 10.808815673046784, "agent_compute-ego0": 0.01803703212817444, "agent_compute-ego1": 0.01943601140570978, "agent_compute-ego2": 0.020097842919240883, "agent_compute-ego3": 0.02092578409117128, "complete-iteration": 0.636746625122083, "set_robot_commands": 0.002851159249019067, "distance-from-start": 3.0499072239726313, "deviation-center-line": 3.1362385614288493, "driven_lanedir_consec": 25.14315208426706, "sim_compute_sim_state": 0.030804816531102723, "sim_compute_performance-ego0": 0.0021332356455324095, "sim_compute_performance-ego1": 0.002250510389659923, "sim_compute_performance-ego2": 0.0024508534621239505, "sim_compute_performance-ego3": 0.0025432381800668227}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 17.270467770102535, "get_ui_image": 0.02426106070201661, "step_physics": 0.4385711426143344, "survival_time": 59.99999999999873, "driven_lanedir": 16.1983124641552, "get_state_dump": 
0.009904227784829371, "get_robot_state": 0.015026195360957137, "sim_render-ego0": 0.0038549743226723903, "sim_render-ego1": 0.005031786393761933, "sim_render-ego2": 0.005011284579643103, "sim_render-ego3": 0.005214940697624721, "get_duckie_state": 1.8481906506540773e-06, "in-drivable-lane": 1.599999999999997, "deviation-heading": 10.807218524601524, "agent_compute-ego0": 0.01803703212817444, "agent_compute-ego1": 0.01943601140570978, "agent_compute-ego2": 0.020097842919240883, "agent_compute-ego3": 0.02092578409117128, "complete-iteration": 0.636746625122083, "set_robot_commands": 0.002851159249019067, "distance-from-start": 2.7936233066196965, "deviation-center-line": 5.486443088156714, "driven_lanedir_consec": 16.1983124641552, "sim_compute_sim_state": 0.030804816531102723, "sim_compute_performance-ego0": 0.0021332356455324095, "sim_compute_performance-ego1": 0.002250510389659923, "sim_compute_performance-ego2": 0.0024508534621239505, "sim_compute_performance-ego3": 0.0025432381800668227}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 26.956327073370787, "get_ui_image": 0.01946490372745917, "step_physics": 0.2085114907066987, "survival_time": 59.99999999999873, "driven_lanedir": 26.16319916247622, "get_state_dump": 0.006832683612464568, "get_robot_state": 0.008050700012194326, "sim_render-ego0": 0.003941185567698609, "sim_render-ego1": 0.004924539523160428, "get_duckie_state": 1.709824497753337e-06, "in-drivable-lane": 0.5999999999999983, "deviation-heading": 9.261321104078878, "agent_compute-ego0": 0.018263043015326788, "agent_compute-ego1": 0.019620680392135888, "complete-iteration": 0.309338063224964, "set_robot_commands": 0.0027823037251544732, "distance-from-start": 1.1421666520953615, "deviation-center-line": 2.4975506247949086, "driven_lanedir_consec": 26.16319916247622, "sim_compute_sim_state": 0.009697033503371214, "sim_compute_performance-ego0": 0.0021720518180472367, "sim_compute_performance-ego1": 0.002336045486742412}, 
"LFV_multi-norm-small_loop-000-ego1": {"driven_any": 26.99965145805751, "get_ui_image": 0.01946490372745917, "step_physics": 0.2085114907066987, "survival_time": 59.99999999999873, "driven_lanedir": 26.268143556315728, "get_state_dump": 0.006832683612464568, "get_robot_state": 0.008050700012194326, "sim_render-ego0": 0.003941185567698609, "sim_render-ego1": 0.004924539523160428, "get_duckie_state": 1.709824497753337e-06, "in-drivable-lane": 0.0, "deviation-heading": 11.643578461174108, "agent_compute-ego0": 0.018263043015326788, "agent_compute-ego1": 0.019620680392135888, "complete-iteration": 0.309338063224964, "set_robot_commands": 0.0027823037251544732, "distance-from-start": 1.5799185807503815, "deviation-center-line": 2.633509437399763, "driven_lanedir_consec": 26.268143556315728, "sim_compute_sim_state": 0.009697033503371214, "sim_compute_performance-ego0": 0.0021720518180472367, "sim_compute_performance-ego1": 0.002336045486742412}}
set_robot_commands_max0.002851159249019067
set_robot_commands_mean0.002828207407730869
set_robot_commands_median0.002851159249019067
set_robot_commands_min0.0027823037251544732
sim_compute_performance-ego0_max0.0021720518180472367
sim_compute_performance-ego0_mean0.0021461743697040185
sim_compute_performance-ego0_median0.0021332356455324095
sim_compute_performance-ego0_min0.0021332356455324095
sim_compute_performance-ego1_max0.002336045486742412
sim_compute_performance-ego1_mean0.0022790220886874193
sim_compute_performance-ego1_median0.002250510389659923
sim_compute_performance-ego1_min0.002250510389659923
sim_compute_performance-ego2_max0.0024508534621239505
sim_compute_performance-ego2_mean0.0024508534621239505
sim_compute_performance-ego2_median0.0024508534621239505
sim_compute_performance-ego2_min0.0024508534621239505
sim_compute_performance-ego3_max0.0025432381800668227
sim_compute_performance-ego3_mean0.0025432381800668227
sim_compute_performance-ego3_median0.0025432381800668227
sim_compute_performance-ego3_min0.0025432381800668227
sim_compute_sim_state_max0.030804816531102723
sim_compute_sim_state_mean0.02376888885519222
sim_compute_sim_state_median0.030804816531102723
sim_compute_sim_state_min0.009697033503371214
sim_render-ego0_max0.003941185567698609
sim_render-ego0_mean0.0038837114043477968
sim_render-ego0_median0.0038549743226723903
sim_render-ego0_min0.0038549743226723903
sim_render-ego1_max0.005031786393761933
sim_render-ego1_mean0.004996037436894764
sim_render-ego1_median0.005031786393761933
sim_render-ego1_min0.004924539523160428
sim_render-ego2_max0.005011284579643103
sim_render-ego2_mean0.005011284579643103
sim_render-ego2_median0.005011284579643103
sim_render-ego2_min0.005011284579643103
sim_render-ego3_max0.005214940697624721
sim_render-ego3_mean0.005214940697624721
sim_render-ego3_median0.005214940697624721
sim_render-ego3_min0.005214940697624721
simulation-passed1
step_physics_max0.4385711426143344
step_physics_mean0.36188459197845585
step_physics_median0.4385711426143344
step_physics_min0.2085114907066987
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
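For a multi-episode job like the one above (six episode entries in its `details` JSON), mean and median genuinely differ. A sketch of how the summary rows could be checked, using the per-episode deviation-center-line values copied from the details JSON of this job:

```python
from statistics import mean, median

# deviation-center-line per episode, copied from this job's details JSON
values = [3.4898232448538233, 2.918949062609556, 3.1362385614288493,
          5.486443088156714, 2.4975506247949086, 2.633509437399763]

print(mean(values))    # matches the deviation-center-line_mean row above
print(median(values))  # matches the deviation-center-line_median row above
```

With an even number of episodes, `statistics.median` averages the two middle values, which is how the reported median (3.0276...) falls between two of the episode values rather than equaling any one of them.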
Job 73596 | submission 13601 | Andras Beres | fsf+il | aido-LFI-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:03:00
survival_time_median10.750000000000018
in-drivable-lane_median1.3000000000000131
driven_lanedir_consec_median4.072893815377444
deviation-center-line_median0.6629238216037172


other stats
agent_compute-ego0_max0.020712823779494675
agent_compute-ego0_mean0.020712823779494675
agent_compute-ego0_median0.020712823779494675
agent_compute-ego0_min0.020712823779494675
complete-iteration_max0.23000576540275855
complete-iteration_mean0.23000576540275855
complete-iteration_median0.23000576540275855
complete-iteration_min0.23000576540275855
deviation-center-line_max0.6629238216037172
deviation-center-line_mean0.6629238216037172
deviation-center-line_min0.6629238216037172
deviation-heading_max1.9263025388133528
deviation-heading_mean1.9263025388133528
deviation-heading_median1.9263025388133528
deviation-heading_min1.9263025388133528
distance-from-start_max2.2242714935233288
distance-from-start_mean2.2242714935233288
distance-from-start_median2.2242714935233288
distance-from-start_min2.2242714935233288
driven_any_max4.576729805083082
driven_any_mean4.576729805083082
driven_any_median4.576729805083082
driven_any_min4.576729805083082
driven_lanedir_consec_max4.072893815377444
driven_lanedir_consec_mean4.072893815377444
driven_lanedir_consec_min4.072893815377444
driven_lanedir_max4.222672239730186
driven_lanedir_mean4.222672239730186
driven_lanedir_median4.222672239730186
driven_lanedir_min4.222672239730186
get_duckie_state_max1.5419942361337167e-06
get_duckie_state_mean1.5419942361337167e-06
get_duckie_state_median1.5419942361337167e-06
get_duckie_state_min1.5419942361337167e-06
get_robot_state_max0.0040860529299135565
get_robot_state_mean0.0040860529299135565
get_robot_state_median0.0040860529299135565
get_robot_state_min0.0040860529299135565
get_state_dump_max0.005088686943054199
get_state_dump_mean0.005088686943054199
get_state_dump_median0.005088686943054199
get_state_dump_min0.005088686943054199
get_ui_image_max0.02621922669587312
get_ui_image_mean0.02621922669587312
get_ui_image_median0.02621922669587312
get_ui_image_min0.02621922669587312
in-drivable-lane_max1.3000000000000131
in-drivable-lane_mean1.3000000000000131
in-drivable-lane_min1.3000000000000131
per-episodes
details{"LFI-norm-udem1-000-ego0": {"driven_any": 4.576729805083082, "get_ui_image": 0.02621922669587312, "step_physics": 0.1526958147684733, "survival_time": 10.750000000000018, "driven_lanedir": 4.222672239730186, "get_state_dump": 0.005088686943054199, "get_robot_state": 0.0040860529299135565, "sim_render-ego0": 0.004055582814746433, "get_duckie_state": 1.5419942361337167e-06, "in-drivable-lane": 1.3000000000000131, "deviation-heading": 1.9263025388133528, "agent_compute-ego0": 0.020712823779494675, "complete-iteration": 0.23000576540275855, "set_robot_commands": 0.0025048719512091745, "distance-from-start": 2.2242714935233288, "deviation-center-line": 0.6629238216037172, "driven_lanedir_consec": 4.072893815377444, "sim_compute_sim_state": 0.01229614460909808, "sim_compute_performance-ego0": 0.0022422229802166976}}
set_robot_commands_max0.0025048719512091745
set_robot_commands_mean0.0025048719512091745
set_robot_commands_median0.0025048719512091745
set_robot_commands_min0.0025048719512091745
sim_compute_performance-ego0_max0.0022422229802166976
sim_compute_performance-ego0_mean0.0022422229802166976
sim_compute_performance-ego0_median0.0022422229802166976
sim_compute_performance-ego0_min0.0022422229802166976
sim_compute_sim_state_max0.01229614460909808
sim_compute_sim_state_mean0.01229614460909808
sim_compute_sim_state_median0.01229614460909808
sim_compute_sim_state_min0.01229614460909808
sim_render-ego0_max0.004055582814746433
sim_render-ego0_mean0.004055582814746433
sim_render-ego0_median0.004055582814746433
sim_render-ego0_min0.004055582814746433
simulation-passed1
step_physics_max0.1526958147684733
step_physics_mean0.1526958147684733
step_physics_median0.1526958147684733
step_physics_min0.1526958147684733
survival_time_max10.750000000000018
survival_time_mean10.750000000000018
survival_time_min10.750000000000018
No reset possible
Job 73526 | submission 13633 | Raphael Jean | mobile-segmentation | aido-LFV-sim-testing | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:06:32
survival_time_median28.850000000000275
in-drivable-lane_median0.0
driven_lanedir_consec_median12.233431913030422
deviation-center-line_median1.5906614412351856


other stats
agent_compute-ego0_max0.019042470463419454
agent_compute-ego0_mean0.019042470463419454
agent_compute-ego0_median0.019042470463419454
agent_compute-ego0_min0.019042470463419454
agent_compute-npc0_max0.029656766607687134
agent_compute-npc0_mean0.029656766607687134
agent_compute-npc0_median0.029656766607687134
agent_compute-npc0_min0.029656766607687134
complete-iteration_max0.28949703229752377
complete-iteration_mean0.28949703229752377
complete-iteration_median0.28949703229752377
complete-iteration_min0.28949703229752377
deviation-center-line_max1.5906614412351856
deviation-center-line_mean1.5906614412351856
deviation-center-line_min1.5906614412351856
deviation-heading_max6.446719165876482
deviation-heading_mean6.446719165876482
deviation-heading_median6.446719165876482
deviation-heading_min6.446719165876482
distance-from-start_max1.5953076707600025
distance-from-start_mean1.5953076707600025
distance-from-start_median1.5953076707600025
distance-from-start_min1.5953076707600025
driven_any_max12.611251969557722
driven_any_mean12.611251969557722
driven_any_median12.611251969557722
driven_any_min12.611251969557722
driven_lanedir_consec_max12.233431913030422
driven_lanedir_consec_mean12.233431913030422
driven_lanedir_consec_min12.233431913030422
driven_lanedir_max12.233431913030422
driven_lanedir_mean12.233431913030422
driven_lanedir_median12.233431913030422
driven_lanedir_min12.233431913030422
get_duckie_state_max2.417184902309959e-06
get_duckie_state_mean2.417184902309959e-06
get_duckie_state_median2.417184902309959e-06
get_duckie_state_min2.417184902309959e-06
get_robot_state_max0.007992885517001565
get_robot_state_mean0.007992885517001565
get_robot_state_median0.007992885517001565
get_robot_state_min0.007992885517001565
get_state_dump_max0.006803989822889282
get_state_dump_mean0.006803989822889282
get_state_dump_median0.006803989822889282
get_state_dump_min0.006803989822889282
get_ui_image_max0.024366663814003493
get_ui_image_mean0.024366663814003493
get_ui_image_median0.024366663814003493
get_ui_image_min0.024366663814003493
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LFV-norm-small_loop-000-ego0": {"driven_any": 12.611251969557722, "get_ui_image": 0.024366663814003493, "step_physics": 0.17311136656566475, "survival_time": 28.850000000000275, "driven_lanedir": 12.233431913030422, "get_state_dump": 0.006803989822889282, "get_robot_state": 0.007992885517001565, "sim_render-ego0": 0.003947159410760477, "sim_render-npc0": 0.0050049005495223205, "get_duckie_state": 2.417184902309959e-06, "in-drivable-lane": 0.0, "deviation-heading": 6.446719165876482, "agent_compute-ego0": 0.019042470463419454, "agent_compute-npc0": 0.029656766607687134, "complete-iteration": 0.28949703229752377, "set_robot_commands": 0.0028001152520361243, "distance-from-start": 1.5953076707600025, "deviation-center-line": 1.5906614412351856, "driven_lanedir_consec": 12.233431913030422, "sim_compute_sim_state": 0.009432680054106926, "sim_compute_performance-ego0": 0.0021771229674659386, "sim_compute_performance-npc0": 0.002368007564214687}}
set_robot_commands (max = mean = median = min): 0.0028001152520361243
sim_compute_performance-ego0 (max = mean = median = min): 0.0021771229674659386
sim_compute_performance-npc0 (max = mean = median = min): 0.002368007564214687
sim_compute_sim_state (max = mean = median = min): 0.009432680054106926
sim_render-ego0 (max = mean = median = min): 0.003947159410760477
sim_render-npc0 (max = mean = median = min): 0.0050049005495223205
simulation-passed: 1
step_physics (max = mean = median = min): 0.17311136656566475
survival_time (max = mean = min): 28.850000000000275
No reset possible
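The aggregate rows above all repeat a single number because this step ran one episode: with one sample, max, mean, median, and min necessarily coincide. A minimal sketch of that collapse, assuming the per-episode `details` blob is available as a dict (abridged here to two metrics):

```python
import statistics

# One episode from the "details" blob above (abridged to two metrics).
details = {
    "LFV-norm-small_loop-000-ego0": {
        "survival_time": 28.850000000000275,
        "driven_lanedir_consec": 12.233431913030422,
    }
}

# With a single episode, every aggregate collapses to that episode's value.
values = [ep["driven_lanedir_consec"] for ep in details.values()]
agg = {
    "max": max(values),
    "mean": statistics.mean(values),
    "median": statistics.median(values),
    "min": min(values),
}
assert len(set(agg.values())) == 1  # all four aggregates are identical
```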
Job 73503 | submission 13602 | Andras Beres | fsf+il | aido-LFP-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:01:19
survival_time_median: 3.049999999999997
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.9145954590897052
deviation-center-line_median: 0.1968288120670777


other stats
agent_compute-ego0 (max = mean = median = min): 0.0210220544568954
complete-iteration (max = mean = median = min): 0.16502873359187956
deviation-center-line (max = mean = min): 0.1968288120670777
deviation-heading (max = mean = median = min): 0.7901111680866758
distance-from-start (max = mean = median = min): 0.7130703825302144
driven_any (max = mean = median = min): 0.9421010653779566
driven_lanedir_consec (max = mean = min): 0.9145954590897052
driven_lanedir (max = mean = median = min): 0.9145954590897052
get_duckie_state (max = mean = median = min): 0.004801184900345341
get_robot_state (max = mean = median = min): 0.004099630540417087
get_state_dump (max = mean = median = min): 0.006036135458177136
get_ui_image (max = mean = median = min): 0.018441219483652425
in-drivable-lane (max = mean = min): 0.0
per-episodes
details{"LFP-norm-small_loop-000-ego0": {"driven_any": 0.9421010653779566, "get_ui_image": 0.018441219483652425, "step_physics": 0.09559075678548504, "survival_time": 3.049999999999997, "driven_lanedir": 0.9145954590897052, "get_state_dump": 0.006036135458177136, "get_robot_state": 0.004099630540417087, "sim_render-ego0": 0.004184734436773484, "get_duckie_state": 0.004801184900345341, "in-drivable-lane": 0.0, "deviation-heading": 0.7901111680866758, "agent_compute-ego0": 0.0210220544568954, "complete-iteration": 0.16502873359187956, "set_robot_commands": 0.0026696189757316343, "distance-from-start": 0.7130703825302144, "deviation-center-line": 0.1968288120670777, "driven_lanedir_consec": 0.9145954590897052, "sim_compute_sim_state": 0.00588224011082803, "sim_compute_performance-ego0": 0.0021955120948053174}}
set_robot_commands (max = mean = median = min): 0.0026696189757316343
sim_compute_performance-ego0 (max = mean = median = min): 0.0021955120948053174
sim_compute_sim_state (max = mean = median = min): 0.00588224011082803
sim_render-ego0 (max = mean = median = min): 0.004184734436773484
simulation-passed: 1
step_physics (max = mean = median = min): 0.09559075678548504
survival_time (max = mean = min): 3.049999999999997
No reset possible
Job 73453 | submission 13614 | Raphael Jean | mobile-segmentation-pedestrian | aido-LFI-sim-testing | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:03:29
survival_time_median: 11.00000000000002
in-drivable-lane_median: 0.5499999999999998
driven_lanedir_consec_median: 3.6842180164673497
deviation-center-line_median: 0.8616794084875568


other stats
agent_compute-ego0 (max = mean = median = min): 0.016619255100440117
complete-iteration (max = mean = median = min): 0.2382037251243764
deviation-center-line (max = mean = min): 0.8616794084875568
deviation-heading (max = mean = median = min): 3.3821336247914613
distance-from-start (max = mean = median = min): 2.299262942515527
driven_any (max = mean = median = min): 4.335534309169334
driven_lanedir_consec (max = mean = min): 3.6842180164673497
driven_lanedir (max = mean = median = min): 3.8387393382053423
get_duckie_state (max = mean = median = min): 1.4110927668092478e-06
get_robot_state (max = mean = median = min): 0.0037442656124339383
get_state_dump (max = mean = median = min): 0.004854698526373815
get_ui_image (max = mean = median = min): 0.03251741590543031
in-drivable-lane (max = mean = min): 0.5499999999999998
per-episodes
details{"LFI-norm-udem1-000-ego0": {"driven_any": 4.335534309169334, "get_ui_image": 0.03251741590543031, "step_physics": 0.16080767968121698, "survival_time": 11.00000000000002, "driven_lanedir": 3.8387393382053423, "get_state_dump": 0.004854698526373815, "get_robot_state": 0.0037442656124339383, "sim_render-ego0": 0.0037148462701167458, "get_duckie_state": 1.4110927668092478e-06, "in-drivable-lane": 0.5499999999999998, "deviation-heading": 3.3821336247914613, "agent_compute-ego0": 0.016619255100440117, "complete-iteration": 0.2382037251243764, "set_robot_commands": 0.0023285555084366603, "distance-from-start": 2.299262942515527, "deviation-center-line": 0.8616794084875568, "driven_lanedir_consec": 3.6842180164673497, "sim_compute_sim_state": 0.011538444061624518, "sim_compute_performance-ego0": 0.001985918882206015}}
set_robot_commands (max = mean = median = min): 0.0023285555084366603
sim_compute_performance-ego0 (max = mean = median = min): 0.001985918882206015
sim_compute_sim_state (max = mean = median = min): 0.011538444061624518
sim_render-ego0 (max = mean = median = min): 0.0037148462701167458
simulation-passed: 1
step_physics (max = mean = median = min): 0.16080767968121698
survival_time (max = mean = min): 11.00000000000002
No reset possible
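A sanity check one can run on any of these timing breakdowns: the per-phase timings in the `details` blob should account for nearly all of the reported `complete-iteration` time, with a small unattributed remainder of loop overhead. A sketch using the numbers from the episode above (phase names and values copied from its `details`):

```python
# Per-phase timings (seconds per iteration) from the LFI-norm-udem1 episode above.
phases = {
    "step_physics": 0.16080767968121698,
    "get_ui_image": 0.03251741590543031,
    "agent_compute-ego0": 0.016619255100440117,
    "sim_compute_sim_state": 0.011538444061624518,
    "get_state_dump": 0.004854698526373815,
    "get_robot_state": 0.0037442656124339383,
    "sim_render-ego0": 0.0037148462701167458,
    "set_robot_commands": 0.0023285555084366603,
    "sim_compute_performance-ego0": 0.001985918882206015,
    "get_duckie_state": 1.4110927668092478e-06,
}
complete_iteration = 0.2382037251243764

# The instrumented phases account for nearly all of the iteration time;
# the remainder (~0.1 ms here) is uninstrumented loop overhead.
overhead = complete_iteration - sum(phases.values())
assert 0 <= overhead < 1e-3
```

Note that `step_physics` dominates the iteration by a wide margin, which is typical for these simulator-bound evaluations.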
Job 73424 | submission 13630 | Raphael Jean | mobile-segmentation | aido-LFI-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:10
survival_time_median: 3.2999999999999963
in-drivable-lane_median: 0.34999999999999876
driven_lanedir_consec_median: 0.7536911996150322
deviation-center-line_median: 0.16458116645954238


other stats
agent_compute-ego0 (max = mean = median = min): 0.017386578801852553
complete-iteration (max = mean = median = min): 0.21304545117847956
deviation-center-line (max = mean = min): 0.16458116645954238
deviation-heading (max = mean = median = min): 1.5289688358860638
distance-from-start (max = mean = median = min): 0.9838641998822314
driven_any (max = mean = median = min): 1.0118490855546596
driven_lanedir_consec (max = mean = min): 0.7536911996150322
driven_lanedir (max = mean = median = min): 0.7637288853908375
get_duckie_state (max = mean = median = min): 1.5123566584800606e-06
get_robot_state (max = mean = median = min): 0.0037648962504828153
get_state_dump (max = mean = median = min): 0.004956704467090208
get_ui_image (max = mean = median = min): 0.03178442414127179
in-drivable-lane (max = mean = min): 0.34999999999999876
per-episodes
details{"LFI-norm-4way-000-ego0": {"driven_any": 1.0118490855546596, "get_ui_image": 0.03178442414127179, "step_physics": 0.13706084151766193, "survival_time": 3.2999999999999963, "driven_lanedir": 0.7637288853908375, "get_state_dump": 0.004956704467090208, "get_robot_state": 0.0037648962504828153, "sim_render-ego0": 0.0038165156521014314, "get_duckie_state": 1.5123566584800606e-06, "in-drivable-lane": 0.34999999999999876, "deviation-heading": 1.5289688358860638, "agent_compute-ego0": 0.017386578801852553, "complete-iteration": 0.21304545117847956, "set_robot_commands": 0.002584855947921525, "distance-from-start": 0.9838641998822314, "deviation-center-line": 0.16458116645954238, "driven_lanedir_consec": 0.7536911996150322, "sim_compute_sim_state": 0.009533608137671628, "sim_compute_performance-ego0": 0.002058840509670884}}
set_robot_commands (max = mean = median = min): 0.002584855947921525
sim_compute_performance-ego0 (max = mean = median = min): 0.002058840509670884
sim_compute_sim_state (max = mean = median = min): 0.009533608137671628
sim_render-ego0 (max = mean = median = min): 0.0038165156521014314
simulation-passed: 1
step_physics (max = mean = median = min): 0.13706084151766193
survival_time (max = mean = min): 3.2999999999999963
No reset possible
Job 73055 | submission 13652 | Jean-SΓ©bastien Grondin πŸ‡¨πŸ‡¦ | exercise_ros_template | aido-LFVI_multi-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | 0:29:38
survival_time_median: 59.99999999999873
in-drivable-lane_median: 27.79999999999936
driven_lanedir_consec_median: 12.555386232822231
deviation-center-line_median: 1.9676852704320784


other stats
agent_compute-ego0 (max = mean = median = min): 0.01878313021695584
agent_compute-ego1 (max = mean = median = min): 0.019627698752207125
agent_compute-ego2 (max = mean = median = min): 0.02145305898763258
agent_compute-ego3 (max = mean = median = min): 0.021692354216563712
complete-iteration (max = mean = median = min): 0.6095638068689097
deviation-center-line_max: 3.943637776563304
deviation-center-line_mean: 2.031974181184645
deviation-center-line_min: 0.24888840731111872
deviation-heading_max: 13.143515006396434
deviation-heading_mean: 6.9100055708285915
deviation-heading_median: 6.73748783202218
deviation-heading_min: 1.0215316128735694
distance-from-start_max: 3.212403033012665
distance-from-start_mean: 2.574009251360802
distance-from-start_median: 2.704136776920036
distance-from-start_min: 1.675360418590472
driven_any_max: 27.073709882593203
driven_any_mean: 14.927577993133152
driven_any_median: 15.090958458695876
driven_any_min: 2.4546851725476535
driven_lanedir_consec_max: 23.85919457061423
driven_lanedir_consec_mean: 12.651752609593338
driven_lanedir_consec_min: 1.637043402114657
driven_lanedir_max: 25.1758366532162
driven_lanedir_mean: 13.606573366345202
driven_lanedir_median: 13.806706705024975
driven_lanedir_min: 1.637043402114657
get_duckie_state (max = mean = median = min): 1.5539888736111834e-06
get_robot_state (max = mean = median = min): 0.015846900598492652
get_state_dump (max = mean = median = min): 0.0104248936626933
get_ui_image (max = mean = median = min): 0.0312231829720274
in-drivable-lane_max: 55.749999999998735
in-drivable-lane_mean: 28.349999999999355
in-drivable-lane_min: 2.0499999999999723
per-episodes
details{"LFVI_multi-norm-udem1-000-ego0": {"driven_any": 26.878773041843292, "get_ui_image": 0.0312231829720274, "step_physics": 0.4057667231579606, "survival_time": 59.99999999999873, "driven_lanedir": 24.927593420400477, "get_state_dump": 0.0104248936626933, "get_robot_state": 0.015846900598492652, "sim_render-ego0": 0.003982898297655295, "sim_render-ego1": 0.004863042617022842, "sim_render-ego2": 0.005169429151739903, "sim_render-ego3": 0.005240283937478046, "get_duckie_state": 1.5539888736111834e-06, "in-drivable-lane": 2.0499999999999723, "deviation-heading": 13.143515006396434, "agent_compute-ego0": 0.01878313021695584, "agent_compute-ego1": 0.019627698752207125, "agent_compute-ego2": 0.02145305898763258, "agent_compute-ego3": 0.021692354216563712, "complete-iteration": 0.6095638068689097, "set_robot_commands": 0.002888337063054856, "distance-from-start": 3.1708987383121534, "deviation-center-line": 3.5900361461088393, "driven_lanedir_consec": 22.46384664504979, "sim_compute_sim_state": 0.023954577886691003, "sim_compute_performance-ego0": 0.0021937566434811, "sim_compute_performance-ego1": 0.0022880703483791178, "sim_compute_performance-ego2": 0.0027021224254573217, "sim_compute_performance-ego3": 0.002645371855545997}, "LFVI_multi-norm-udem1-000-ego1": {"driven_any": 2.4546851725476535, "get_ui_image": 0.0312231829720274, "step_physics": 0.4057667231579606, "survival_time": 59.99999999999873, "driven_lanedir": 1.637043402114657, "get_state_dump": 0.0104248936626933, "get_robot_state": 0.015846900598492652, "sim_render-ego0": 0.003982898297655295, "sim_render-ego1": 0.004863042617022842, "sim_render-ego2": 0.005169429151739903, "sim_render-ego3": 0.005240283937478046, "get_duckie_state": 1.5539888736111834e-06, "in-drivable-lane": 55.749999999998735, "deviation-heading": 1.0215316128735694, "agent_compute-ego0": 0.01878313021695584, "agent_compute-ego1": 0.019627698752207125, "agent_compute-ego2": 0.02145305898763258, "agent_compute-ego3": 
0.021692354216563712, "complete-iteration": 0.6095638068689097, "set_robot_commands": 0.002888337063054856, "distance-from-start": 2.237374815527918, "deviation-center-line": 0.24888840731111872, "driven_lanedir_consec": 1.637043402114657, "sim_compute_sim_state": 0.023954577886691003, "sim_compute_performance-ego0": 0.0021937566434811, "sim_compute_performance-ego1": 0.0022880703483791178, "sim_compute_performance-ego2": 0.0027021224254573217, "sim_compute_performance-ego3": 0.002645371855545997}, "LFVI_multi-norm-udem1-000-ego2": {"driven_any": 3.3031438755484563, "get_ui_image": 0.0312231829720274, "step_physics": 0.4057667231579606, "survival_time": 59.99999999999873, "driven_lanedir": 2.685819989649472, "get_state_dump": 0.0104248936626933, "get_robot_state": 0.015846900598492652, "sim_render-ego0": 0.003982898297655295, "sim_render-ego1": 0.004863042617022842, "sim_render-ego2": 0.005169429151739903, "sim_render-ego3": 0.005240283937478046, "get_duckie_state": 1.5539888736111834e-06, "in-drivable-lane": 53.24999999999875, "deviation-heading": 1.4821104016899092, "agent_compute-ego0": 0.01878313021695584, "agent_compute-ego1": 0.019627698752207125, "agent_compute-ego2": 0.02145305898763258, "agent_compute-ego3": 0.021692354216563712, "complete-iteration": 0.6095638068689097, "set_robot_commands": 0.002888337063054856, "distance-from-start": 1.675360418590472, "deviation-center-line": 0.3453343947553178, "driven_lanedir_consec": 2.6469258205946753, "sim_compute_sim_state": 0.023954577886691003, "sim_compute_performance-ego0": 0.0021937566434811, "sim_compute_performance-ego1": 0.0022880703483791178, "sim_compute_performance-ego2": 0.0027021224254573217, "sim_compute_performance-ego3": 0.002645371855545997}, "LFVI_multi-norm-udem1-000-ego3": {"driven_any": 27.073709882593203, "get_ui_image": 0.0312231829720274, "step_physics": 0.4057667231579606, "survival_time": 59.99999999999873, "driven_lanedir": 25.1758366532162, "get_state_dump": 0.0104248936626933, 
"get_robot_state": 0.015846900598492652, "sim_render-ego0": 0.003982898297655295, "sim_render-ego1": 0.004863042617022842, "sim_render-ego2": 0.005169429151739903, "sim_render-ego3": 0.005240283937478046, "get_duckie_state": 1.5539888736111834e-06, "in-drivable-lane": 2.3499999999999694, "deviation-heading": 11.992865262354451, "agent_compute-ego0": 0.01878313021695584, "agent_compute-ego1": 0.019627698752207125, "agent_compute-ego2": 0.02145305898763258, "agent_compute-ego3": 0.021692354216563712, "complete-iteration": 0.6095638068689097, "set_robot_commands": 0.002888337063054856, "distance-from-start": 3.212403033012665, "deviation-center-line": 3.943637776563304, "driven_lanedir_consec": 23.85919457061423, "sim_compute_sim_state": 0.023954577886691003, "sim_compute_performance-ego0": 0.0021937566434811, "sim_compute_performance-ego1": 0.0022880703483791178, "sim_compute_performance-ego2": 0.0027021224254573217, "sim_compute_performance-ego3": 0.002645371855545997}}
set_robot_commands (max = mean = median = min): 0.002888337063054856
sim_compute_performance-ego0 (max = mean = median = min): 0.0021937566434811
sim_compute_performance-ego1 (max = mean = median = min): 0.0022880703483791178
sim_compute_performance-ego2 (max = mean = median = min): 0.0027021224254573217
sim_compute_performance-ego3 (max = mean = median = min): 0.002645371855545997
sim_compute_sim_state (max = mean = median = min): 0.023954577886691003
sim_render-ego0 (max = mean = median = min): 0.003982898297655295
sim_render-ego1 (max = mean = median = min): 0.004863042617022842
sim_render-ego2 (max = mean = median = min): 0.005169429151739903
sim_render-ego3 (max = mean = median = min): 0.005240283937478046
simulation-passed: 1
step_physics (max = mean = median = min): 0.4057667231579606
survival_time (max = mean = min): 59.99999999999873
No reset possible
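Unlike the single-episode jobs, this four-ego step aggregates each metric across the four episodes in `details`. The deviation-center-line aggregates above can be reproduced from the four per-episode values shown there, a sketch:

```python
import statistics

# deviation-center-line per ego, copied from the "details" blob above.
per_episode = {
    "ego0": 3.5900361461088393,
    "ego1": 0.24888840731111872,
    "ego2": 0.3453343947553178,
    "ego3": 3.943637776563304,
}

values = list(per_episode.values())
stats = {
    "max": max(values),
    "mean": statistics.mean(values),
    # even sample count: median is the midpoint of the two middle values
    "median": statistics.median(values),
    "min": min(values),
}

# These reproduce the deviation-center-line aggregates reported for this job.
assert abs(stats["mean"] - 2.031974181184645) < 1e-12
assert abs(stats["median"] - 1.9676852704320784) < 1e-12
```

The same recomputation works for any of the multi-ego metrics; the shared-infrastructure timings (e.g. `step_physics`, `get_ui_image`) are identical across egos, which is why their four aggregates still coincide.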
Job 73044 | submission 13605 | Andras Beres | fsf+il | aido-LFVI-sim-testing | sim-3of4 | success | up to date: no | gpu-production-spot-0-01 | 0:01:05
other stats
simulation-passed: 1
skipped: 1
No reset possible
Job 72997 | submission 13645 | Jean-SΓ©bastien Grondin πŸ‡¨πŸ‡¦ | exercise_ros_template | aido-LFI-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:01
survival_time_median: 5.09999999999999
in-drivable-lane_median: 2.5499999999999923
driven_lanedir_consec_median: 0.8232428017983384
deviation-center-line_median: 0.15995774305273336


other stats
agent_compute-ego0 (max = mean = median = min): 0.016515967915359052
complete-iteration (max = mean = median = min): 0.20635100938741444
deviation-center-line (max = mean = min): 0.15995774305273336
deviation-heading (max = mean = median = min): 0.9366225251819636
distance-from-start (max = mean = median = min): 1.67393315062867
driven_any (max = mean = median = min): 1.7131248533562276
driven_lanedir_consec (max = mean = min): 0.8232428017983384
driven_lanedir (max = mean = median = min): 0.8232428017983384
get_duckie_state (max = mean = median = min): 1.8124441498691595e-06
get_robot_state (max = mean = median = min): 0.0038771837660409873
get_state_dump (max = mean = median = min): 0.0050031837907809655
get_ui_image (max = mean = median = min): 0.03128418181706401
in-drivable-lane (max = mean = min): 2.5499999999999923
per-episodes
details{"LFI-norm-4way-000-ego0": {"driven_any": 1.7131248533562276, "get_ui_image": 0.03128418181706401, "step_physics": 0.1306988429097296, "survival_time": 5.09999999999999, "driven_lanedir": 0.8232428017983384, "get_state_dump": 0.0050031837907809655, "get_robot_state": 0.0038771837660409873, "sim_render-ego0": 0.003825412213223652, "get_duckie_state": 1.8124441498691595e-06, "in-drivable-lane": 2.5499999999999923, "deviation-heading": 0.9366225251819636, "agent_compute-ego0": 0.016515967915359052, "complete-iteration": 0.20635100938741444, "set_robot_commands": 0.002417587539524708, "distance-from-start": 1.67393315062867, "deviation-center-line": 0.15995774305273336, "driven_lanedir_consec": 0.8232428017983384, "sim_compute_sim_state": 0.010515884288306377, "sim_compute_performance-ego0": 0.0021156986940254288}}
set_robot_commands (max = mean = median = min): 0.002417587539524708
sim_compute_performance-ego0 (max = mean = median = min): 0.0021156986940254288
sim_compute_sim_state (max = mean = median = min): 0.010515884288306377
sim_render-ego0 (max = mean = median = min): 0.003825412213223652
simulation-passed: 1
step_physics (max = mean = median = min): 0.1306988429097296
survival_time (max = mean = min): 5.09999999999999
No reset possible
Job 72934 | submission 13630 | Raphael Jean | mobile-segmentation | aido-LFI-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:03:11
survival_time_median: 10.90000000000002
in-drivable-lane_median: 2.549999999999999
driven_lanedir_consec_median: 3.0046083568372204
deviation-center-line_median: 0.6262117349081998


other stats
agent_compute-ego0 (max = mean = median = min): 0.018742689803310727
complete-iteration (max = mean = median = min): 0.27477784461626725
deviation-center-line (max = mean = min): 0.6262117349081998
deviation-heading (max = mean = median = min): 3.5858430126670746
distance-from-start (max = mean = median = min): 2.252862109297907
driven_any (max = mean = median = min): 4.3412182399326245
driven_lanedir_consec (max = mean = min): 3.0046083568372204
driven_lanedir (max = mean = median = min): 3.0054181089925898
get_duckie_state (max = mean = median = min): 1.6961467864850884e-06
get_robot_state (max = mean = median = min): 0.004184287432666239
get_state_dump (max = mean = median = min): 0.005728325343023152
get_ui_image (max = mean = median = min): 0.03412403254748479
in-drivable-lane (max = mean = min): 2.549999999999999
per-episodes
details{"LFI-norm-udem1-000-ego0": {"driven_any": 4.3412182399326245, "get_ui_image": 0.03412403254748479, "step_physics": 0.18990948102245592, "survival_time": 10.90000000000002, "driven_lanedir": 3.0054181089925898, "get_state_dump": 0.005728325343023152, "get_robot_state": 0.004184287432666239, "sim_render-ego0": 0.004147954183082058, "get_duckie_state": 1.6961467864850884e-06, "in-drivable-lane": 2.549999999999999, "deviation-heading": 3.5858430126670746, "agent_compute-ego0": 0.018742689803310727, "complete-iteration": 0.27477784461626725, "set_robot_commands": 0.002694914874420862, "distance-from-start": 2.252862109297907, "deviation-center-line": 0.6262117349081998, "driven_lanedir_consec": 3.0046083568372204, "sim_compute_sim_state": 0.012810112678841369, "sim_compute_performance-ego0": 0.0023183691991518623}}
set_robot_commands (max/mean/median/min): 0.002694914874420862
sim_compute_performance-ego0 (max/mean/median/min): 0.0023183691991518623
sim_compute_sim_state (max/mean/median/min): 0.012810112678841369
sim_render-ego0 (max/mean/median/min): 0.004147954183082058
simulation-passed: 1
step_physics (max/mean/median/min): 0.18990948102245592
survival_time (max/mean/min): 10.90000000000002
No reset possible
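The max/mean/median/min quadruples above are per-metric summaries over the episodes listed in the `details` JSON; with a single episode, all four collapse to the same value, which is why each quadruple repeats one number. A minimal sketch of how such summaries could be derived (the `aggregate` helper and its input shape are assumptions for illustration, not the evaluator's actual code):

```python
import json
from statistics import mean, median


def aggregate(details_json: str) -> dict:
    """Summarise per-episode metrics as min/mean/median/max.

    `details_json` is assumed to map episode name -> {metric: value},
    matching the shape of the `details` rows above.
    """
    episodes = json.loads(details_json)
    metrics: dict[str, list[float]] = {}
    for ep_stats in episodes.values():
        for name, value in ep_stats.items():
            metrics.setdefault(name, []).append(value)
    return {
        name: {
            "min": min(values),
            "mean": mean(values),
            "median": median(values),
            "max": max(values),
        }
        for name, values in metrics.items()
    }


# With one episode, min == mean == median == max for every metric.
example = '{"LFI-norm-udem1-000-ego0": {"driven_any": 4.3412182399326245}}'
summary = aggregate(example)
```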
72876 | 13636 | Raphael Jean | mobile-segmentation | aido-LFVI-sim-validation | sim-1of4 | success | no | gpu-production-spot-0-01 | 0:02:42
survival_time_median: 3.349999999999996
in-drivable-lane_median: 0.14999999999999947
driven_lanedir_consec_median: 0.7062939907389563
deviation-center-line_median: 0.20112987674979793


other stats
agent_compute-ego0 (max/mean/median/min): 0.020080103593714097
agent_compute-npc0 (max/mean/median/min): 0.05251753330230713
agent_compute-npc1 (max/mean/median/min): 0.05185578149907729
agent_compute-npc2 (max/mean/median/min): 0.05169670371448293
complete-iteration (max/mean/median/min): 0.6439165578168982
deviation-center-line (max/mean/min): 0.20112987674979793
deviation-heading (max/mean/median/min): 2.227954545719922
distance-from-start (max/mean/median/min): 0.9793952686810682
driven_any (max/mean/median/min): 1.0159212416835397
driven_lanedir_consec (max/mean/min): 0.7062939907389563
driven_lanedir (max/mean/median/min): 0.7367540057012216
get_duckie_state (max/mean/median/min): 2.103693345013787e-06
get_robot_state (max/mean/median/min): 0.01743962484247544
get_state_dump (max/mean/median/min): 0.011418665156644933
get_ui_image (max/mean/median/min): 0.03576612121918622
in-drivable-lane (max/mean/min): 0.14999999999999947
per-episodes details:
{"LFVI-norm-4way-000-ego0": {"driven_any": 1.0159212416835397, "get_ui_image": 0.03576612121918622, "step_physics": 0.31833013015634876, "survival_time": 3.349999999999996, "driven_lanedir": 0.7367540057012216, "get_state_dump": 0.011418665156644933, "get_robot_state": 0.01743962484247544, "sim_render-ego0": 0.00454490675645716, "sim_render-npc0": 0.0049465649268206425, "sim_render-npc1": 0.004871582283693201, "sim_render-npc2": 0.004679998930762796, "get_duckie_state": 2.103693345013787e-06, "in-drivable-lane": 0.14999999999999947, "deviation-heading": 2.227954545719922, "agent_compute-ego0": 0.020080103593714097, "agent_compute-npc0": 0.05251753330230713, "agent_compute-npc1": 0.05185578149907729, "agent_compute-npc2": 0.05169670371448293, "complete-iteration": 0.6439165578168982, "set_robot_commands": 0.002920704729416791, "distance-from-start": 0.9793952686810682, "deviation-center-line": 0.20112987674979793, "driven_lanedir_consec": 0.7062939907389563, "sim_compute_sim_state": 0.04354528118582333, "sim_compute_performance-ego0": 0.002664580064661363, "sim_compute_performance-npc0": 0.0024825159241171446, "sim_compute_performance-npc1": 0.0026281195528366987, "sim_compute_performance-npc2": 0.0025451428749982048}}
set_robot_commands (max/mean/median/min): 0.002920704729416791
sim_compute_performance-ego0 (max/mean/median/min): 0.002664580064661363
sim_compute_performance-npc0 (max/mean/median/min): 0.0024825159241171446
sim_compute_performance-npc1 (max/mean/median/min): 0.0026281195528366987
sim_compute_performance-npc2 (max/mean/median/min): 0.0025451428749982048
sim_compute_sim_state (max/mean/median/min): 0.04354528118582333
sim_render-ego0 (max/mean/median/min): 0.00454490675645716
sim_render-npc0 (max/mean/median/min): 0.0049465649268206425
sim_render-npc1 (max/mean/median/min): 0.004871582283693201
sim_render-npc2 (max/mean/median/min): 0.004679998930762796
simulation-passed: 1
step_physics (max/mean/median/min): 0.31833013015634876
survival_time (max/mean/min): 3.349999999999996
No reset possible
72850 | 13603 | Andras Beres | fsf+il | aido-LFV-sim-testing | sim-2of4 | success | no | gpu-production-spot-0-01 | 0:01:55
survival_time_median: 1.950000000000001
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.49025535974196544
deviation-center-line_median: 0.0666223519118146


other stats
agent_compute-ego0 (max/mean/median/min): 0.018303096294403076
agent_compute-npc0 (max/mean/median/min): 0.052813148498535155
agent_compute-npc1 (max/mean/median/min): 0.05182775855064392
agent_compute-npc2 (max/mean/median/min): 0.05349888801574707
complete-iteration (max/mean/median/min): 0.48012850880622865
deviation-center-line (max/mean/min): 0.0666223519118146
deviation-heading (max/mean/median/min): 0.1955198976408657
distance-from-start (max/mean/median/min): 0.4920253919309538
driven_any (max/mean/median/min): 0.49422477867086406
driven_lanedir_consec (max/mean/min): 0.49025535974196544
driven_lanedir (max/mean/median/min): 0.49025535974196544
get_duckie_state (max/mean/median/min): 1.7642974853515626e-06
get_robot_state (max/mean/median/min): 0.015348422527313232
get_state_dump (max/mean/median/min): 0.01049821376800537
get_ui_image (max/mean/median/min): 0.02309783101081848
in-drivable-lane (max/mean/min): 0.0
per-episodes details:
{"LFV-norm-loop-000-ego0": {"driven_any": 0.49422477867086406, "get_ui_image": 0.02309783101081848, "step_physics": 0.1933164119720459, "survival_time": 1.950000000000001, "driven_lanedir": 0.49025535974196544, "get_state_dump": 0.01049821376800537, "get_robot_state": 0.015348422527313232, "sim_render-ego0": 0.004012900590896607, "sim_render-npc0": 0.004332786798477173, "sim_render-npc1": 0.004483479261398316, "sim_render-npc2": 0.00431552529335022, "get_duckie_state": 1.7642974853515626e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.1955198976408657, "agent_compute-ego0": 0.018303096294403076, "agent_compute-npc0": 0.052813148498535155, "agent_compute-npc1": 0.05182775855064392, "agent_compute-npc2": 0.05349888801574707, "complete-iteration": 0.48012850880622865, "set_robot_commands": 0.0026149868965148924, "distance-from-start": 0.4920253919309538, "deviation-center-line": 0.0666223519118146, "driven_lanedir_consec": 0.49025535974196544, "sim_compute_sim_state": 0.02401961088180542, "sim_compute_performance-ego0": 0.002234166860580444, "sim_compute_performance-npc0": 0.0024147331714630127, "sim_compute_performance-npc1": 0.0023004293441772463, "sim_compute_performance-npc2": 0.0024275362491607668}}
set_robot_commands (max/mean/median/min): 0.0026149868965148924
sim_compute_performance-ego0 (max/mean/median/min): 0.002234166860580444
sim_compute_performance-npc0 (max/mean/median/min): 0.0024147331714630127
sim_compute_performance-npc1 (max/mean/median/min): 0.0023004293441772463
sim_compute_performance-npc2 (max/mean/median/min): 0.0024275362491607668
sim_compute_sim_state (max/mean/median/min): 0.02401961088180542
sim_render-ego0 (max/mean/median/min): 0.004012900590896607
sim_render-npc0 (max/mean/median/min): 0.004332786798477173
sim_render-npc1 (max/mean/median/min): 0.004483479261398316
sim_render-npc2 (max/mean/median/min): 0.00431552529335022
simulation-passed: 1
step_physics (max/mean/median/min): 0.1933164119720459
survival_time (max/mean/min): 1.950000000000001
No reset possible
72793 | 13648 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LFV-sim-testing | sim-1of4 | success | no | gpu-production-spot-0-01 | 0:04:43
survival_time_median: 7.599999999999981
in-drivable-lane_median: 1.1999999999999955
driven_lanedir_consec_median: 2.20026609783568
deviation-center-line_median: 0.5349419549975512


other stats
agent_compute-ego0 (max/mean/median/min): 0.01980002721150716
agent_compute-npc0 (max/mean/median/min): 0.06620172425812366
agent_compute-npc1 (max/mean/median/min): 0.06647780829784918
agent_compute-npc2 (max/mean/median/min): 0.06529169612460667
agent_compute-npc3 (max/mean/median/min): 0.06490570891137216
complete-iteration (max/mean/median/min): 0.8256507839252746
deviation-center-line (max/mean/min): 0.5349419549975512
deviation-heading (max/mean/median/min): 2.1825921672410327
distance-from-start (max/mean/median/min): 1.288689158758821
driven_any (max/mean/median/min): 2.86269071221611
driven_lanedir_consec (max/mean/min): 2.20026609783568
driven_lanedir (max/mean/median/min): 2.20026609783568
get_duckie_state (max/mean/median/min): 1.975913452946283e-06
get_robot_state (max/mean/median/min): 0.021403403064004737
get_state_dump (max/mean/median/min): 0.013514515621210236
get_ui_image (max/mean/median/min): 0.029990838244070415
in-drivable-lane (max/mean/min): 1.1999999999999955
per-episodes details:
{"LFV-norm-zigzag-000-ego0": {"driven_any": 2.86269071221611, "get_ui_image": 0.029990838244070415, "step_physics": 0.3709873763564365, "survival_time": 7.599999999999981, "driven_lanedir": 2.20026609783568, "get_state_dump": 0.013514515621210236, "get_robot_state": 0.021403403064004737, "sim_render-ego0": 0.0043348767399008755, "sim_render-npc0": 0.0052277636683844275, "sim_render-npc1": 0.004890543183470084, "sim_render-npc2": 0.00460973440432081, "sim_render-npc3": 0.00453144740435033, "get_duckie_state": 1.975913452946283e-06, "in-drivable-lane": 1.1999999999999955, "deviation-heading": 2.1825921672410327, "agent_compute-ego0": 0.01980002721150716, "agent_compute-npc0": 0.06620172425812366, "agent_compute-npc1": 0.06647780829784918, "agent_compute-npc2": 0.06529169612460667, "agent_compute-npc3": 0.06490570891137216, "complete-iteration": 0.8256507839252746, "set_robot_commands": 0.0028985902374865963, "distance-from-start": 1.288689158758821, "deviation-center-line": 0.5349419549975512, "driven_lanedir_consec": 2.20026609783568, "sim_compute_sim_state": 0.055436477162479576, "sim_compute_performance-ego0": 0.0025446944766574437, "sim_compute_performance-npc0": 0.0024974268246320337, "sim_compute_performance-npc1": 0.0027024247287924775, "sim_compute_performance-npc2": 0.002545392591189715, "sim_compute_performance-npc3": 0.0024975920035169015}}
set_robot_commands (max/mean/median/min): 0.0028985902374865963
sim_compute_performance-ego0 (max/mean/median/min): 0.0025446944766574437
sim_compute_performance-npc0 (max/mean/median/min): 0.0024974268246320337
sim_compute_performance-npc1 (max/mean/median/min): 0.0027024247287924775
sim_compute_performance-npc2 (max/mean/median/min): 0.002545392591189715
sim_compute_performance-npc3 (max/mean/median/min): 0.0024975920035169015
sim_compute_sim_state (max/mean/median/min): 0.055436477162479576
sim_render-ego0 (max/mean/median/min): 0.0043348767399008755
sim_render-npc0 (max/mean/median/min): 0.0052277636683844275
sim_render-npc1 (max/mean/median/min): 0.004890543183470084
sim_render-npc2 (max/mean/median/min): 0.00460973440432081
sim_render-npc3 (max/mean/median/min): 0.00453144740435033
simulation-passed: 1
step_physics (max/mean/median/min): 0.3709873763564365
survival_time (max/mean/min): 7.599999999999981
No reset possible
72756 | 13603 | Andras Beres | fsf+il | aido-LFV-sim-testing | sim-3of4 | success | no | gpu-production-spot-0-01 | 0:02:26
survival_time_median: 3.349999999999996
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.1786484083677446
deviation-center-line_median: 0.3409021168681383


other stats
agent_compute-ego0 (max/mean/median/min): 0.019500981358920828
agent_compute-npc0 (max/mean/median/min): 0.05198716065462898
agent_compute-npc1 (max/mean/median/min): 0.054009974002838135
agent_compute-npc2 (max/mean/median/min): 0.05455442035899443
agent_compute-npc3 (max/mean/median/min): 0.05550608564825619
complete-iteration (max/mean/median/min): 0.6716703527113971
deviation-center-line (max/mean/min): 0.3409021168681383
deviation-heading (max/mean/median/min): 0.3659282914360968
distance-from-start (max/mean/median/min): 0.9588605444453936
driven_any (max/mean/median/min): 1.190067769083098
driven_lanedir_consec (max/mean/min): 1.1786484083677446
driven_lanedir (max/mean/median/min): 1.1786484083677446
get_duckie_state (max/mean/median/min): 1.9353978774126837e-06
get_robot_state (max/mean/median/min): 0.020031501265133127
get_state_dump (max/mean/median/min): 0.013284045107224408
get_ui_image (max/mean/median/min): 0.026912289507248825
in-drivable-lane (max/mean/min): 0.0
per-episodes details:
{"LFV-norm-techtrack-000-ego0": {"driven_any": 1.190067769083098, "get_ui_image": 0.026912289507248825, "step_physics": 0.28394264684003945, "survival_time": 3.349999999999996, "driven_lanedir": 1.1786484083677446, "get_state_dump": 0.013284045107224408, "get_robot_state": 0.020031501265133127, "sim_render-ego0": 0.00420446606243358, "sim_render-npc0": 0.003978333052466898, "sim_render-npc1": 0.004173233228571275, "sim_render-npc2": 0.004459752756006578, "sim_render-npc3": 0.004385664182550767, "get_duckie_state": 1.9353978774126837e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.3659282914360968, "agent_compute-ego0": 0.019500981358920828, "agent_compute-npc0": 0.05198716065462898, "agent_compute-npc1": 0.054009974002838135, "agent_compute-npc2": 0.05455442035899443, "agent_compute-npc3": 0.05550608564825619, "complete-iteration": 0.6716703527113971, "set_robot_commands": 0.002792835235595703, "distance-from-start": 0.9588605444453936, "deviation-center-line": 0.3409021168681383, "driven_lanedir_consec": 1.1786484083677446, "sim_compute_sim_state": 0.04480066018946031, "sim_compute_performance-ego0": 0.002478676683762494, "sim_compute_performance-npc0": 0.0022459906690260943, "sim_compute_performance-npc1": 0.00225985050201416, "sim_compute_performance-npc2": 0.0023946621838737935, "sim_compute_performance-npc3": 0.002381121411043055}}
set_robot_commands (max/mean/median/min): 0.002792835235595703
sim_compute_performance-ego0 (max/mean/median/min): 0.002478676683762494
sim_compute_performance-npc0 (max/mean/median/min): 0.0022459906690260943
sim_compute_performance-npc1 (max/mean/median/min): 0.00225985050201416
sim_compute_performance-npc2 (max/mean/median/min): 0.0023946621838737935
sim_compute_performance-npc3 (max/mean/median/min): 0.002381121411043055
sim_compute_sim_state (max/mean/median/min): 0.04480066018946031
sim_render-ego0 (max/mean/median/min): 0.00420446606243358
sim_render-npc0 (max/mean/median/min): 0.003978333052466898
sim_render-npc1 (max/mean/median/min): 0.004173233228571275
sim_render-npc2 (max/mean/median/min): 0.004459752756006578
sim_render-npc3 (max/mean/median/min): 0.004385664182550767
simulation-passed: 1
step_physics (max/mean/median/min): 0.28394264684003945
survival_time (max/mean/min): 3.349999999999996
No reset possible
72742 | 13606 | Andras Beres | fsf+il | aido-LFVI-sim-validation | sim-2of4 | success | no | gpu-production-spot-0-01 | 0:01:08
other stats
simulation-passed: 1
skipped: 1
No reset possible
72722 | 13606 | Andras Beres | fsf+il | aido-LFVI-sim-validation | sim-2of4 | success | no | gpu-production-spot-0-01 | 0:01:13
other stats
simulation-passed: 1
skipped: 1
No reset possible
72698 | 13606 | Andras Beres | fsf+il | aido-LFVI-sim-validation | sim-2of4 | success | no | gpu-production-spot-0-01 | 0:01:06
other stats
simulation-passed: 1
skipped: 1
No reset possible
72646 | 13618 | Raphael Jean | mobile-segmentation-pedestrian | aido-LFV-sim-validation | sim-3of4 | success | no | gpu-production-spot-0-01 | 0:03:27
survival_time_median: 5.349999999999989
in-drivable-lane_median: 0.4499999999999984
driven_lanedir_consec_median: 1.5873742376375517
deviation-center-line_median: 0.2645878097562212


other stats
agent_compute-ego0 (max/mean/median/min): 0.018750826517740887
agent_compute-npc0 (max/mean/median/min): 0.04842443157125403
agent_compute-npc1 (max/mean/median/min): 0.04926532065426862
agent_compute-npc2 (max/mean/median/min): 0.047722200552622475
agent_compute-npc3 (max/mean/median/min): 0.048345857196384005
complete-iteration (max/mean/median/min): 0.7103816933102078
deviation-center-line (max/mean/min): 0.2645878097562212
deviation-heading (max/mean/median/min): 1.14686564726024
distance-from-start (max/mean/median/min): 1.541984554261046
driven_any (max/mean/median/min): 1.8363618414952552
driven_lanedir_consec (max/mean/min): 1.5873742376375517
driven_lanedir (max/mean/median/min): 1.5873742376375517
get_duckie_state (max/mean/median/min): 1.6755527920193142e-06
get_robot_state (max/mean/median/min): 0.01960853073332045
get_state_dump (max/mean/median/min): 0.012608863689281324
get_ui_image (max/mean/median/min): 0.03350316815906101
in-drivable-lane (max/mean/min): 0.4499999999999984
per-episodes details:
{"LFV-norm-techtrack-000-ego0": {"driven_any": 1.8363618414952552, "get_ui_image": 0.03350316815906101, "step_physics": 0.3337225251727634, "survival_time": 5.349999999999989, "driven_lanedir": 1.5873742376375517, "get_state_dump": 0.012608863689281324, "get_robot_state": 0.01960853073332045, "sim_render-ego0": 0.003890178821705006, "sim_render-npc0": 0.004837400383419461, "sim_render-npc1": 0.004505791046001293, "sim_render-npc2": 0.00433666397024084, "sim_render-npc3": 0.004395926440203631, "get_duckie_state": 1.6755527920193142e-06, "in-drivable-lane": 0.4499999999999984, "deviation-heading": 1.14686564726024, "agent_compute-ego0": 0.018750826517740887, "agent_compute-npc0": 0.04842443157125403, "agent_compute-npc1": 0.04926532065426862, "agent_compute-npc2": 0.047722200552622475, "agent_compute-npc3": 0.048345857196384005, "complete-iteration": 0.7103816933102078, "set_robot_commands": 0.0026828160992375125, "distance-from-start": 1.541984554261046, "deviation-center-line": 0.2645878097562212, "driven_lanedir_consec": 1.5873742376375517, "sim_compute_sim_state": 0.05054322878519694, "sim_compute_performance-ego0": 0.0022466580073038736, "sim_compute_performance-npc0": 0.0023053663748281972, "sim_compute_performance-npc1": 0.002457179405071117, "sim_compute_performance-npc2": 0.002339184284210205, "sim_compute_performance-npc3": 0.0024093985557556152}}
set_robot_commands: 0.0026828160992375125 (max = mean = median = min)
sim_compute_performance-ego0: 0.0022466580073038736 (max = mean = median = min)
sim_compute_performance-npc0: 0.0023053663748281972 (max = mean = median = min)
sim_compute_performance-npc1: 0.002457179405071117 (max = mean = median = min)
sim_compute_performance-npc2: 0.002339184284210205 (max = mean = median = min)
sim_compute_performance-npc3: 0.0024093985557556152 (max = mean = median = min)
sim_compute_sim_state: 0.05054322878519694 (max = mean = median = min)
sim_render-ego0: 0.003890178821705006 (max = mean = median = min)
sim_render-npc0: 0.004837400383419461 (max = mean = median = min)
sim_render-npc1: 0.004505791046001293 (max = mean = median = min)
sim_render-npc2: 0.00433666397024084 (max = mean = median = min)
sim_render-npc3: 0.004395926440203631 (max = mean = median = min)
simulation-passed: 1
step_physics: 0.3337225251727634 (max = mean = median = min)
survival_time: 5.349999999999989 (max = mean = min)
No reset possible
Job 72579 | submission 13634 | Raphael Jean | mobile-segmentation | aido-LFV-sim-validation | step sim-2of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:03:44
survival_time_median: 7.94999999999998
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.076199244080904
deviation-center-line_median: 0.36636007694166806


other stats
agent_compute-ego0: 0.017563928663730622 (max = mean = median = min)
agent_compute-npc0: 0.035440127551555636 (max = mean = median = min)
agent_compute-npc1: 0.03636737763881683 (max = mean = median = min)
agent_compute-npc2: 0.03570288568735123 (max = mean = median = min)
complete-iteration: 0.5264641925692558 (max = mean = median = min)
deviation-center-line: 0.36636007694166806 (max = mean = min)
deviation-heading: 1.5405621669895946 (max = mean = median = min)
distance-from-start: 2.512232328379946 (max = mean = median = min)
driven_any: 3.1526109347314053 (max = mean = median = min)
driven_lanedir_consec: 3.076199244080904 (max = mean = min)
driven_lanedir: 3.076199244080904 (max = mean = median = min)
get_duckie_state: 1.8358230590820313e-06 (max = mean = median = min)
get_robot_state: 0.01586225926876068 (max = mean = median = min)
get_state_dump: 0.010630130767822266 (max = mean = median = min)
get_ui_image: 0.03053531050682068 (max = mean = median = min)
in-drivable-lane: 0.0 (max = mean = min)
per-episodes
details{"LFV-norm-loop-000-ego0": {"driven_any": 3.1526109347314053, "get_ui_image": 0.03053531050682068, "step_physics": 0.2832300841808319, "survival_time": 7.94999999999998, "driven_lanedir": 3.076199244080904, "get_state_dump": 0.010630130767822266, "get_robot_state": 0.01586225926876068, "sim_render-ego0": 0.003961886465549469, "sim_render-npc0": 0.004661011695861817, "sim_render-npc1": 0.004273664951324463, "sim_render-npc2": 0.004182638227939605, "get_duckie_state": 1.8358230590820313e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.5405621669895946, "agent_compute-ego0": 0.017563928663730622, "agent_compute-npc0": 0.035440127551555636, "agent_compute-npc1": 0.03636737763881683, "agent_compute-npc2": 0.03570288568735123, "complete-iteration": 0.5264641925692558, "set_robot_commands": 0.0026443839073181153, "distance-from-start": 2.512232328379946, "deviation-center-line": 0.36636007694166806, "driven_lanedir_consec": 3.076199244080904, "sim_compute_sim_state": 0.02419560104608536, "sim_compute_performance-ego0": 0.0022222548723220825, "sim_compute_performance-npc0": 0.002231124043464661, "sim_compute_performance-npc1": 0.0023266449570655823, "sim_compute_performance-npc2": 0.002290375530719757}}
set_robot_commands: 0.0026443839073181153 (max = mean = median = min)
sim_compute_performance-ego0: 0.0022222548723220825 (max = mean = median = min)
sim_compute_performance-npc0: 0.002231124043464661 (max = mean = median = min)
sim_compute_performance-npc1: 0.0023266449570655823 (max = mean = median = min)
sim_compute_performance-npc2: 0.002290375530719757 (max = mean = median = min)
sim_compute_sim_state: 0.02419560104608536 (max = mean = median = min)
sim_render-ego0: 0.003961886465549469 (max = mean = median = min)
sim_render-npc0: 0.004661011695861817 (max = mean = median = min)
sim_render-npc1: 0.004273664951324463 (max = mean = median = min)
sim_render-npc2: 0.004182638227939605 (max = mean = median = min)
simulation-passed: 1
step_physics: 0.2832300841808319 (max = mean = median = min)
survival_time: 7.94999999999998 (max = mean = min)
No reset possible
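Every aggregate row in the block above repeats a single number because this job scored one episode; a minimal sketch (using the survival_time value from the details blob above) of why max, mean, median, and min then coincide:

```python
import statistics

# A single-episode job: the per-episode list has one entry, taken from
# the "details" blob above (LFV-norm-loop-000-ego0).
survival_times = [7.94999999999998]

aggregates = {
    "max": max(survival_times),
    "mean": statistics.mean(survival_times),
    "median": statistics.median(survival_times),
    "min": min(survival_times),
}

# With one episode, every order statistic is the episode value itself,
# which is why the four rows per stat all show the same number.
assert len(set(aggregates.values())) == 1
```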
Job 72154 | submission 13607 | Andras Beres | fsf+il | aido-LFVI_multi-sim-validation | step sim-0of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:27:43
survival_time_median: 59.99999999999873
in-drivable-lane_median: 0.7000000000000015
driven_lanedir_consec_median: 18.98970614009544
deviation-center-line_median: 5.304483148973869


other stats
agent_compute-ego0: 0.020513435089022394 (max = mean = median = min)
agent_compute-ego1: 0.0196205634657886 (max = mean = median = min)
agent_compute-ego2: 0.018518399239380492 (max = mean = median = min)
agent_compute-ego3: 0.018828999092934232 (max = mean = median = min)
complete-iteration: 0.562853237869936 (max = mean = median = min)
deviation-center-line: max 7.268305566842247, mean 5.677543715712187, min 4.832902998058767
deviation-heading: max 18.422680208164067, mean 10.79831348117402, median 8.800013032739157, min 7.170547651053705
distance-from-start: max 3.368943281640222, mean 2.4689092407968034, median 2.6493431323620698, min 1.208007416822851
driven_any: max 30.945702612084457, mean 20.73922685102185, median 20.826129875223756, min 10.35894504155543
driven_lanedir_consec: max 28.477912697271883, mean 18.771434787749033, min 8.628414173533377
driven_lanedir: max 30.56308786385711, mean 20.028350435707942, median 20.093924109386087, min 9.36246566020248
get_duckie_state: 1.672900388083986e-06 (max = mean = median = min)
get_robot_state: 0.015461638607053732 (max = mean = median = min)
get_state_dump: 0.010280601388707348 (max = mean = median = min)
get_ui_image: 0.027120213623745653 (max = mean = median = min)
in-drivable-lane: max 2.200000000000019, mean 0.9000000000000055, min 0.0
per-episodes
details{"LFVI_multi-norm-udem1-000-ego0": {"driven_any": 10.35894504155543, "get_ui_image": 0.027120213623745653, "step_physics": 0.3685788959388828, "survival_time": 59.99999999999873, "driven_lanedir": 9.36246566020248, "get_state_dump": 0.010280601388707348, "get_robot_state": 0.015461638607053732, "sim_render-ego0": 0.00398917757998299, "sim_render-ego1": 0.003902114698233751, "sim_render-ego2": 0.003939364970871848, "sim_render-ego3": 0.003950616501451631, "get_duckie_state": 1.672900388083986e-06, "in-drivable-lane": 1.400000000000003, "deviation-heading": 10.3662022863958, "agent_compute-ego0": 0.020513435089022394, "agent_compute-ego1": 0.0196205634657886, "agent_compute-ego2": 0.018518399239380492, "agent_compute-ego3": 0.018828999092934232, "complete-iteration": 0.562853237869936, "set_robot_commands": 0.0025451270666448, "distance-from-start": 1.208007416822851, "deviation-center-line": 7.268305566842247, "driven_lanedir_consec": 8.628414173533377, "sim_compute_sim_state": 0.029135275442931773, "sim_compute_performance-ego0": 0.002219218794848897, "sim_compute_performance-ego1": 0.0021500023675897935, "sim_compute_performance-ego2": 0.002161970345007192, "sim_compute_performance-ego3": 0.0021162418203488874}, "LFVI_multi-norm-udem1-000-ego1": {"driven_any": 30.945702612084457, "get_ui_image": 0.027120213623745653, "step_physics": 0.3685788959388828, "survival_time": 59.99999999999873, "driven_lanedir": 30.56308786385711, "get_state_dump": 0.010280601388707348, "get_robot_state": 0.015461638607053732, "sim_render-ego0": 0.00398917757998299, "sim_render-ego1": 0.003902114698233751, "sim_render-ego2": 0.003939364970871848, "sim_render-ego3": 0.003950616501451631, "get_duckie_state": 1.672900388083986e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.170547651053705, "agent_compute-ego0": 0.020513435089022394, "agent_compute-ego1": 0.0196205634657886, "agent_compute-ego2": 0.018518399239380492, "agent_compute-ego3": 0.018828999092934232, 
"complete-iteration": 0.562853237869936, "set_robot_commands": 0.0025451270666448, "distance-from-start": 3.368943281640222, "deviation-center-line": 4.8694553558708575, "driven_lanedir_consec": 28.38825768908836, "sim_compute_sim_state": 0.029135275442931773, "sim_compute_performance-ego0": 0.002219218794848897, "sim_compute_performance-ego1": 0.0021500023675897935, "sim_compute_performance-ego2": 0.002161970345007192, "sim_compute_performance-ego3": 0.0021162418203488874}, "LFVI_multi-norm-udem1-000-ego2": {"driven_any": 30.83964080463951, "get_ui_image": 0.027120213623745653, "step_physics": 0.3685788959388828, "survival_time": 59.99999999999873, "driven_lanedir": 30.452993956424407, "get_state_dump": 0.010280601388707348, "get_robot_state": 0.015461638607053732, "sim_render-ego0": 0.00398917757998299, "sim_render-ego1": 0.003902114698233751, "sim_render-ego2": 0.003939364970871848, "sim_render-ego3": 0.003950616501451631, "get_duckie_state": 1.672900388083986e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.233823779082514, "agent_compute-ego0": 0.020513435089022394, "agent_compute-ego1": 0.0196205634657886, "agent_compute-ego2": 0.018518399239380492, "agent_compute-ego3": 0.018828999092934232, "complete-iteration": 0.562853237869936, "set_robot_commands": 0.0025451270666448, "distance-from-start": 2.9377952776954244, "deviation-center-line": 4.832902998058767, "driven_lanedir_consec": 28.477912697271883, "sim_compute_sim_state": 0.029135275442931773, "sim_compute_performance-ego0": 0.002219218794848897, "sim_compute_performance-ego1": 0.0021500023675897935, "sim_compute_performance-ego2": 0.002161970345007192, "sim_compute_performance-ego3": 0.0021162418203488874}, "LFVI_multi-norm-udem1-000-ego3": {"driven_any": 10.812618945808, "get_ui_image": 0.027120213623745653, "step_physics": 0.3685788959388828, "survival_time": 59.99999999999873, "driven_lanedir": 9.734854262347763, "get_state_dump": 0.010280601388707348, "get_robot_state": 0.015461638607053732, 
"sim_render-ego0": 0.00398917757998299, "sim_render-ego1": 0.003902114698233751, "sim_render-ego2": 0.003939364970871848, "sim_render-ego3": 0.003950616501451631, "get_duckie_state": 1.672900388083986e-06, "in-drivable-lane": 2.200000000000019, "deviation-heading": 18.422680208164067, "agent_compute-ego0": 0.020513435089022394, "agent_compute-ego1": 0.0196205634657886, "agent_compute-ego2": 0.018518399239380492, "agent_compute-ego3": 0.018828999092934232, "complete-iteration": 0.562853237869936, "set_robot_commands": 0.0025451270666448, "distance-from-start": 2.3608909870287156, "deviation-center-line": 5.739510942076881, "driven_lanedir_consec": 9.591154591102518, "sim_compute_sim_state": 0.029135275442931773, "sim_compute_performance-ego0": 0.002219218794848897, "sim_compute_performance-ego1": 0.0021500023675897935, "sim_compute_performance-ego2": 0.002161970345007192, "sim_compute_performance-ego3": 0.0021162418203488874}}
set_robot_commands: 0.0025451270666448 (max = mean = median = min)
sim_compute_performance-ego0: 0.002219218794848897 (max = mean = median = min)
sim_compute_performance-ego1: 0.0021500023675897935 (max = mean = median = min)
sim_compute_performance-ego2: 0.002161970345007192 (max = mean = median = min)
sim_compute_performance-ego3: 0.0021162418203488874 (max = mean = median = min)
sim_compute_sim_state: 0.029135275442931773 (max = mean = median = min)
sim_render-ego0: 0.00398917757998299 (max = mean = median = min)
sim_render-ego1: 0.003902114698233751 (max = mean = median = min)
sim_render-ego2: 0.003939364970871848 (max = mean = median = min)
sim_render-ego3: 0.003950616501451631 (max = mean = median = min)
simulation-passed: 1
step_physics: 0.3685788959388828 (max = mean = median = min)
survival_time: 59.99999999999873 (max = mean = min)
No reset possible
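In the multi-ego job above, the four episodes score differently, and the summary rows are plain order statistics over the per-episode values; a sketch (deviation-center-line values copied from the details blob) that reproduces the median reported at the top of the block:

```python
import statistics

# deviation-center-line for the four egos of LFVI_multi-norm-udem1-000,
# copied from the "details" blob above
per_episode = [
    7.268305566842247,   # ego0
    4.8694553558708575,  # ego1
    4.832902998058767,   # ego2
    5.739510942076881,   # ego3
]

# With an even number of episodes the median is the mean of the two
# middle values; this matches deviation-center-line_median above.
summary = {
    "max": max(per_episode),
    "mean": statistics.mean(per_episode),
    "median": statistics.median(per_episode),
    "min": min(per_episode),
}
```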
Job 72087 | submission 13607 | Andras Beres | fsf+il | aido-LFVI_multi-sim-validation | step sim-1of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:08:57
survival_time_median: 16.350000000000097
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.9674411617925172
deviation-center-line_median: 1.946681229477245


other stats
agent_compute-ego0: 0.023039625185291943 (max = mean = median = min)
agent_compute-ego1: 0.02101996686400437 (max = mean = median = min)
agent_compute-ego2: 0.02143220785187512 (max = mean = median = min)
agent_compute-ego3: 0.02013728240641152 (max = mean = median = min)
complete-iteration: 0.5404205060586696 (max = mean = median = min)
deviation-center-line: max 2.13903090835517, mean 1.7524531481758876, min 0.9774192253938904
deviation-heading: max 6.001532719686196, mean 4.3676357848102825, median 4.270980851486137, min 2.9270487165826595
distance-from-start: max 2.6125256151347096, mean 1.2358441033165737, median 0.8499341696025298, min 0.6309824589265258
driven_any: max 7.494667005923533, mean 2.584338763258444, median 1.1030551774210005, min 0.6365776922682437
driven_lanedir_consec: max 6.299624255567989, mean 2.2147102461951143, min 0.6243344056274325
driven_lanedir: max 6.857890015763682, mean 2.4035404430718197, median 1.0659686754480813, min 0.6243344056274325
get_duckie_state: 1.8848151695437547e-06 (max = mean = median = min)
get_robot_state: 0.017354258676854577 (max = mean = median = min)
get_state_dump: 0.01152078480255313 (max = mean = median = min)
get_ui_image: 0.030623647497921454 (max = mean = median = min)
in-drivable-lane: max 1.0500000000000052, mean 0.2625000000000013, min 0.0
per-episodes
details{"LFVI_multi-norm-4way-000-ego0": {"driven_any": 7.494667005923533, "get_ui_image": 0.030623647497921454, "step_physics": 0.33777717410064323, "survival_time": 16.350000000000097, "driven_lanedir": 6.857890015763682, "get_state_dump": 0.01152078480255313, "get_robot_state": 0.017354258676854577, "sim_render-ego0": 0.004387150450450618, "sim_render-ego1": 0.004424697742229555, "sim_render-ego2": 0.0044258629403463225, "sim_render-ego3": 0.004291272744899843, "get_duckie_state": 1.8848151695437547e-06, "in-drivable-lane": 1.0500000000000052, "deviation-heading": 2.9270487165826595, "agent_compute-ego0": 0.023039625185291943, "agent_compute-ego1": 0.02101996686400437, "agent_compute-ego2": 0.02143220785187512, "agent_compute-ego3": 0.02013728240641152, "complete-iteration": 0.5404205060586696, "set_robot_commands": 0.002898863902906092, "distance-from-start": 2.6125256151347096, "deviation-center-line": 0.9774192253938904, "driven_lanedir_consec": 6.299624255567989, "sim_compute_sim_state": 0.01846433994246692, "sim_compute_performance-ego0": 0.002397785099541269, "sim_compute_performance-ego1": 0.002410598644396154, "sim_compute_performance-ego2": 0.0024386048316955566, "sim_compute_performance-ego3": 0.0024008816335259415}, "LFVI_multi-norm-4way-000-ego1": {"driven_any": 0.6599251395441524, "get_ui_image": 0.030623647497921454, "step_physics": 0.33777717410064323, "survival_time": 16.350000000000097, "driven_lanedir": 0.643812987097192, "get_state_dump": 0.01152078480255313, "get_robot_state": 0.017354258676854577, "sim_render-ego0": 0.004387150450450618, "sim_render-ego1": 0.004424697742229555, "sim_render-ego2": 0.0044258629403463225, "sim_render-ego3": 0.004291272744899843, "get_duckie_state": 1.8848151695437547e-06, "in-drivable-lane": 0.0, "deviation-heading": 5.2640206913939, "agent_compute-ego0": 0.023039625185291943, "agent_compute-ego1": 0.02101996686400437, "agent_compute-ego2": 0.02143220785187512, "agent_compute-ego3": 0.02013728240641152, 
"complete-iteration": 0.5404205060586696, "set_robot_commands": 0.002898863902906092, "distance-from-start": 0.65239895059817, "deviation-center-line": 2.13903090835517, "driven_lanedir_consec": 0.643812987097192, "sim_compute_sim_state": 0.01846433994246692, "sim_compute_performance-ego0": 0.002397785099541269, "sim_compute_performance-ego1": 0.002410598644396154, "sim_compute_performance-ego2": 0.0024386048316955566, "sim_compute_performance-ego3": 0.0024008816335259415}, "LFVI_multi-norm-4way-000-ego2": {"driven_any": 1.5461852152978486, "get_ui_image": 0.030623647497921454, "step_physics": 0.33777717410064323, "survival_time": 16.350000000000097, "driven_lanedir": 1.4881243637989707, "get_state_dump": 0.01152078480255313, "get_robot_state": 0.017354258676854577, "sim_render-ego0": 0.004387150450450618, "sim_render-ego1": 0.004424697742229555, "sim_render-ego2": 0.0044258629403463225, "sim_render-ego3": 0.004291272744899843, "get_duckie_state": 1.8848151695437547e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.2779410115783754, "agent_compute-ego0": 0.023039625185291943, "agent_compute-ego1": 0.02101996686400437, "agent_compute-ego2": 0.02143220785187512, "agent_compute-ego3": 0.02013728240641152, "complete-iteration": 0.5404205060586696, "set_robot_commands": 0.002898863902906092, "distance-from-start": 1.0474693886068895, "deviation-center-line": 1.9722605874230732, "driven_lanedir_consec": 1.2910693364878425, "sim_compute_sim_state": 0.01846433994246692, "sim_compute_performance-ego0": 0.002397785099541269, "sim_compute_performance-ego1": 0.002410598644396154, "sim_compute_performance-ego2": 0.0024386048316955566, "sim_compute_performance-ego3": 0.0024008816335259415}, "LFVI_multi-norm-4way-000-ego3": {"driven_any": 0.6365776922682437, "get_ui_image": 0.030623647497921454, "step_physics": 0.33777717410064323, "survival_time": 16.350000000000097, "driven_lanedir": 0.6243344056274325, "get_state_dump": 0.01152078480255313, "get_robot_state": 
0.017354258676854577, "sim_render-ego0": 0.004387150450450618, "sim_render-ego1": 0.004424697742229555, "sim_render-ego2": 0.0044258629403463225, "sim_render-ego3": 0.004291272744899843, "get_duckie_state": 1.8848151695437547e-06, "in-drivable-lane": 0.0, "deviation-heading": 6.001532719686196, "agent_compute-ego0": 0.023039625185291943, "agent_compute-ego1": 0.02101996686400437, "agent_compute-ego2": 0.02143220785187512, "agent_compute-ego3": 0.02013728240641152, "complete-iteration": 0.5404205060586696, "set_robot_commands": 0.002898863902906092, "distance-from-start": 0.6309824589265258, "deviation-center-line": 1.9211018715314172, "driven_lanedir_consec": 0.6243344056274325, "sim_compute_sim_state": 0.01846433994246692, "sim_compute_performance-ego0": 0.002397785099541269, "sim_compute_performance-ego1": 0.002410598644396154, "sim_compute_performance-ego2": 0.0024386048316955566, "sim_compute_performance-ego3": 0.0024008816335259415}}
set_robot_commands: 0.002898863902906092 (max = mean = median = min)
sim_compute_performance-ego0: 0.002397785099541269 (max = mean = median = min)
sim_compute_performance-ego1: 0.002410598644396154 (max = mean = median = min)
sim_compute_performance-ego2: 0.0024386048316955566 (max = mean = median = min)
sim_compute_performance-ego3: 0.0024008816335259415 (max = mean = median = min)
sim_compute_sim_state: 0.01846433994246692 (max = mean = median = min)
sim_render-ego0: 0.004387150450450618 (max = mean = median = min)
sim_render-ego1: 0.004424697742229555 (max = mean = median = min)
sim_render-ego2: 0.0044258629403463225 (max = mean = median = min)
sim_render-ego3: 0.004291272744899843 (max = mean = median = min)
simulation-passed: 1
step_physics: 0.33777717410064323 (max = mean = median = min)
survival_time: 16.350000000000097 (max = mean = min)
No reset possible
Job 72072 | submission 13649 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LFV-sim-validation | step sim-2of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:03:44
survival_time_median: 7.549999999999981
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 2.7567434825361374
deviation-center-line_median: 0.5572727202531719


other stats
agent_compute-ego0: 0.0178974145337155 (max = mean = median = min)
agent_compute-npc0: 0.036078335423218574 (max = mean = median = min)
agent_compute-npc1: 0.03663906141331321 (max = mean = median = min)
agent_compute-npc2: 0.03642632302485014 (max = mean = median = min)
complete-iteration: 0.4959768841141149 (max = mean = median = min)
deviation-center-line: 0.5572727202531719 (max = mean = min)
deviation-heading: 1.8487826912702032 (max = mean = median = min)
distance-from-start: 2.4363336942586167 (max = mean = median = min)
driven_any: 2.866938265683754 (max = mean = median = min)
driven_lanedir_consec: 2.7567434825361374 (max = mean = min)
driven_lanedir: 2.7567434825361374 (max = mean = median = min)
get_duckie_state: 1.857155247738487e-06 (max = mean = median = min)
get_robot_state: 0.01510579178207799 (max = mean = median = min)
get_state_dump: 0.01056926030861704 (max = mean = median = min)
get_ui_image: 0.028470764034672788 (max = mean = median = min)
in-drivable-lane: 0.0 (max = mean = min)
per-episodes
details{"LFV-norm-loop-000-ego0": {"driven_any": 2.866938265683754, "get_ui_image": 0.028470764034672788, "step_physics": 0.2556300727944625, "survival_time": 7.549999999999981, "driven_lanedir": 2.7567434825361374, "get_state_dump": 0.01056926030861704, "get_robot_state": 0.01510579178207799, "sim_render-ego0": 0.003941052838375694, "sim_render-npc0": 0.004410260602047569, "sim_render-npc1": 0.004252159281780845, "sim_render-npc2": 0.004106352203770688, "get_duckie_state": 1.857155247738487e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.8487826912702032, "agent_compute-ego0": 0.0178974145337155, "agent_compute-npc0": 0.036078335423218574, "agent_compute-npc1": 0.03663906141331321, "agent_compute-npc2": 0.03642632302485014, "complete-iteration": 0.4959768841141149, "set_robot_commands": 0.0025956787561115468, "distance-from-start": 2.4363336942586167, "deviation-center-line": 0.5572727202531719, "driven_lanedir_consec": 2.7567434825361374, "sim_compute_sim_state": 0.02304726525356895, "sim_compute_performance-ego0": 0.002186596393585205, "sim_compute_performance-npc0": 0.002138434272063406, "sim_compute_performance-npc1": 0.0022998765895241185, "sim_compute_performance-npc2": 0.0022540076782828883}}
set_robot_commands (max = mean = median = min): 0.0025956787561115468
sim_compute_performance-ego0 (max = mean = median = min): 0.002186596393585205
sim_compute_performance-npc0 (max = mean = median = min): 0.002138434272063406
sim_compute_performance-npc1 (max = mean = median = min): 0.0022998765895241185
sim_compute_performance-npc2 (max = mean = median = min): 0.0022540076782828883
sim_compute_sim_state (max = mean = median = min): 0.02304726525356895
sim_render-ego0 (max = mean = median = min): 0.003941052838375694
sim_render-npc0 (max = mean = median = min): 0.004410260602047569
sim_render-npc1 (max = mean = median = min): 0.004252159281780845
sim_render-npc2 (max = mean = median = min): 0.004106352203770688
simulation-passed: 1
step_physics (max = mean = median = min): 0.2556300727944625
survival_time (max = mean = min): 7.549999999999981
No reset possible
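The quantile rows above are derived from the per-episode details object for each job: each metric is collected across episodes and reduced to max/mean/median/min. A minimal sketch of that reduction, assuming the details payload is the JSON object shown above (only a few of its metrics are reproduced here); with a single episode, all four statistics collapse to the same value:

```python
import json
import statistics

# A small excerpt of the per-episode details reported for job 75470
# (one episode, "LFV-norm-loop-000-ego0").
details = json.loads(
    '{"LFV-norm-loop-000-ego0": {'
    '"survival_time": 7.549999999999981, '
    '"driven_lanedir": 2.7567434825361374, '
    '"deviation-center-line": 0.0}}'
)

def aggregate(metric):
    """Collect one metric across all episodes and reduce it to the
    max/mean/median/min statistics shown in the stats tables."""
    values = [episode[metric] for episode in details.values()]
    return {
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "min": min(values),
    }

# With one episode, max = mean = median = min.
print(aggregate("driven_lanedir"))
```

This is why single-episode steps on this page report identical values for all four quantiles of every metric.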
Job 72015 | submission 13625 | Raphael Jean | mobile-segmentation | aido-LF-sim-testing | sim-0of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:07:50
driven_lanedir_consec_median: 15.79095435505635
survival_time_median: 44.44999999999961
deviation-center-line_median: 1.673420454647352
in-drivable-lane_median: 7.999999999999805


other stats
agent_compute-ego0 (max = mean = median = min): 0.019848966598510744
complete-iteration (max = mean = median = min): 0.23216077209858413
deviation-center-line (max = mean = min): 1.673420454647352
deviation-heading (max = mean = median = min): 7.702327085872287
distance-from-start (max = mean = median = min): 3.2839946585846596
driven_any (max = mean = median = min): 19.814288648982817
driven_lanedir_consec (max = mean = min): 15.79095435505635
driven_lanedir (max = mean = median = min): 15.79095435505635
get_duckie_state (max = mean = median = min): 1.767780003922709e-06
get_robot_state (max = mean = median = min): 0.004373540503255437
get_state_dump (max = mean = median = min): 0.005391237708959686
get_ui_image (max = mean = median = min): 0.026853693469186847
in-drivable-lane (max = mean = min): 7.999999999999805
per-episodes details:
{"LF-norm-loop-000-ego0": {"driven_any": 19.814288648982817, "get_ui_image": 0.026853693469186847, "step_physics": 0.15686500340365292, "survival_time": 44.44999999999961, "driven_lanedir": 15.79095435505635, "get_state_dump": 0.005391237708959686, "get_robot_state": 0.004373540503255437, "sim_render-ego0": 0.0042652143521255325, "get_duckie_state": 1.767780003922709e-06, "in-drivable-lane": 7.999999999999805, "deviation-heading": 7.702327085872287, "agent_compute-ego0": 0.019848966598510744, "complete-iteration": 0.23216077209858413, "set_robot_commands": 0.002720470374889588, "distance-from-start": 3.2839946585846596, "deviation-center-line": 1.673420454647352, "driven_lanedir_consec": 15.79095435505635, "sim_compute_sim_state": 0.00936122824636738, "sim_compute_performance-ego0": 0.0023657332645373396}}
set_robot_commands (max = mean = median = min): 0.002720470374889588
sim_compute_performance-ego0 (max = mean = median = min): 0.0023657332645373396
sim_compute_sim_state (max = mean = median = min): 0.00936122824636738
sim_render-ego0 (max = mean = median = min): 0.0042652143521255325
simulation-passed: 1
step_physics (max = mean = median = min): 0.15686500340365292
survival_time (max = mean = min): 44.44999999999961
No reset possible
Job 71907 | submission 13649 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LFV-sim-validation | sim-3of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:08:06
survival_time_median: 17.450000000000113
in-drivable-lane_median: 0.6499999999999977
driven_lanedir_consec_median: 6.679716152315404
deviation-center-line_median: 0.9053335033920245


other stats
agent_compute-ego0 (max = mean = median = min): 0.018878483772277833
agent_compute-npc0 (max = mean = median = min): 0.04177905491420201
agent_compute-npc1 (max = mean = median = min): 0.04840599468776158
agent_compute-npc2 (max = mean = median = min): 0.04768100193568638
agent_compute-npc3 (max = mean = median = min): 0.046822400093078614
complete-iteration (max = mean = median = min): 0.7075882346289498
deviation-center-line (max = mean = min): 0.9053335033920245
deviation-heading (max = mean = median = min): 3.589631625067123
distance-from-start (max = mean = median = min): 3.941649048918415
driven_any (max = mean = median = min): 7.164101530398068
driven_lanedir_consec (max = mean = min): 6.679716152315404
driven_lanedir (max = mean = median = min): 6.679716152315404
get_duckie_state (max = mean = median = min): 1.6396386282784598e-06
get_robot_state (max = mean = median = min): 0.01926302228655134
get_state_dump (max = mean = median = min): 0.012036810602460589
get_ui_image (max = mean = median = min): 0.03332273415156773
in-drivable-lane (max = mean = min): 0.6499999999999977
per-episodes details:
{"LFV-norm-techtrack-000-ego0": {"driven_any": 7.164101530398068, "get_ui_image": 0.03332273415156773, "step_physics": 0.34276074409484864, "survival_time": 17.450000000000113, "driven_lanedir": 6.679716152315404, "get_state_dump": 0.012036810602460589, "get_robot_state": 0.01926302228655134, "sim_render-ego0": 0.004048819541931152, "sim_render-npc0": 0.004898018836975097, "sim_render-npc1": 0.004458005768912179, "sim_render-npc2": 0.004224216597420828, "sim_render-npc3": 0.004193016460963658, "get_duckie_state": 1.6396386282784598e-06, "in-drivable-lane": 0.6499999999999977, "deviation-heading": 3.589631625067123, "agent_compute-ego0": 0.018878483772277833, "agent_compute-npc0": 0.04177905491420201, "agent_compute-npc1": 0.04840599468776158, "agent_compute-npc2": 0.04768100193568638, "agent_compute-npc3": 0.046822400093078614, "complete-iteration": 0.7075882346289498, "set_robot_commands": 0.002606464113507952, "distance-from-start": 3.941649048918415, "deviation-center-line": 0.9053335033920245, "driven_lanedir_consec": 6.679716152315404, "sim_compute_sim_state": 0.0496818665095738, "sim_compute_performance-ego0": 0.00230762209211077, "sim_compute_performance-npc0": 0.002304049219403948, "sim_compute_performance-npc1": 0.0023522649492536273, "sim_compute_performance-npc2": 0.0022856146948678152, "sim_compute_performance-npc3": 0.0022143139157976425}}
set_robot_commands (max = mean = median = min): 0.002606464113507952
sim_compute_performance-ego0 (max = mean = median = min): 0.00230762209211077
sim_compute_performance-npc0 (max = mean = median = min): 0.002304049219403948
sim_compute_performance-npc1 (max = mean = median = min): 0.0023522649492536273
sim_compute_performance-npc2 (max = mean = median = min): 0.0022856146948678152
sim_compute_performance-npc3 (max = mean = median = min): 0.0022143139157976425
sim_compute_sim_state (max = mean = median = min): 0.0496818665095738
sim_render-ego0 (max = mean = median = min): 0.004048819541931152
sim_render-npc0 (max = mean = median = min): 0.004898018836975097
sim_render-npc1 (max = mean = median = min): 0.004458005768912179
sim_render-npc2 (max = mean = median = min): 0.004224216597420828
sim_render-npc3 (max = mean = median = min): 0.004193016460963658
simulation-passed: 1
step_physics (max = mean = median = min): 0.34276074409484864
survival_time (max = mean = min): 17.450000000000113
No reset possible
Job 71851 | submission 13649 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LFV-sim-validation | sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:03:32
survival_time_median: 7.549999999999981
in-drivable-lane_median: 0.34999999999999876
driven_lanedir_consec_median: 2.565689227355534
deviation-center-line_median: 0.43859101709016907


other stats
agent_compute-ego0 (max = mean = median = min): 0.017543729982878033
agent_compute-npc0 (max = mean = median = min): 0.03633651294206318
agent_compute-npc1 (max = mean = median = min): 0.036053861442365144
agent_compute-npc2 (max = mean = median = min): 0.035907789280540066
complete-iteration (max = mean = median = min): 0.49259427189826965
deviation-center-line (max = mean = min): 0.43859101709016907
deviation-heading (max = mean = median = min): 1.9637161424607572
distance-from-start (max = mean = median = min): 2.4944702352813737
driven_any (max = mean = median = min): 2.868160263361808
driven_lanedir_consec (max = mean = min): 2.565689227355534
driven_lanedir (max = mean = median = min): 2.565689227355534
get_duckie_state (max = mean = median = min): 1.6830469432630037e-06
get_robot_state (max = mean = median = min): 0.015103503277427271
get_state_dump (max = mean = median = min): 0.010120528308968795
get_ui_image (max = mean = median = min): 0.028378081949133625
in-drivable-lane (max = mean = min): 0.34999999999999876
per-episodes details:
{"LFV-norm-loop-000-ego0": {"driven_any": 2.868160263361808, "get_ui_image": 0.028378081949133625, "step_physics": 0.2535720640107205, "survival_time": 7.549999999999981, "driven_lanedir": 2.565689227355534, "get_state_dump": 0.010120528308968795, "get_robot_state": 0.015103503277427271, "sim_render-ego0": 0.003875489297666048, "sim_render-npc0": 0.004595259302540829, "sim_render-npc1": 0.004222840070724487, "sim_render-npc2": 0.004514330311825401, "get_duckie_state": 1.6830469432630037e-06, "in-drivable-lane": 0.34999999999999876, "deviation-heading": 1.9637161424607572, "agent_compute-ego0": 0.017543729982878033, "agent_compute-npc0": 0.03633651294206318, "agent_compute-npc1": 0.036053861442365144, "agent_compute-npc2": 0.035907789280540066, "complete-iteration": 0.49259427189826965, "set_robot_commands": 0.0025498819978613603, "distance-from-start": 2.4944702352813737, "deviation-center-line": 0.43859101709016907, "driven_lanedir_consec": 2.565689227355534, "sim_compute_sim_state": 0.023153025853006465, "sim_compute_performance-ego0": 0.0021096624826130116, "sim_compute_performance-npc0": 0.002145461345973768, "sim_compute_performance-npc1": 0.002274693627106516, "sim_compute_performance-npc2": 0.0021745688036868445}}
set_robot_commands (max = mean = median = min): 0.0025498819978613603
sim_compute_performance-ego0 (max = mean = median = min): 0.0021096624826130116
sim_compute_performance-npc0 (max = mean = median = min): 0.002145461345973768
sim_compute_performance-npc1 (max = mean = median = min): 0.002274693627106516
sim_compute_performance-npc2 (max = mean = median = min): 0.0021745688036868445
sim_compute_sim_state (max = mean = median = min): 0.023153025853006465
sim_render-ego0 (max = mean = median = min): 0.003875489297666048
sim_render-npc0 (max = mean = median = min): 0.004595259302540829
sim_render-npc1 (max = mean = median = min): 0.004222840070724487
sim_render-npc2 (max = mean = median = min): 0.004514330311825401
simulation-passed: 1
step_physics (max = mean = median = min): 0.2535720640107205
survival_time (max = mean = min): 7.549999999999981
No reset possible
Job 71796 | submission 13649 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LFV-sim-validation | sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:03:53
survival_time_median: 7.599999999999981
in-drivable-lane_median: 0.4999999999999982
driven_lanedir_consec_median: 2.458956036746369
deviation-center-line_median: 0.4872214854246912


other stats
agent_compute-ego0 (max = mean = median = min): 0.01837821879418068
agent_compute-npc0 (max = mean = median = min): 0.03638524946823619
agent_compute-npc1 (max = mean = median = min): 0.03685732292973138
agent_compute-npc2 (max = mean = median = min): 0.037616414961472056
complete-iteration (max = mean = median = min): 0.5060382201001535
deviation-center-line (max = mean = min): 0.4872214854246912
deviation-heading (max = mean = median = min): 2.3142937255027127
distance-from-start (max = mean = median = min): 2.521870744628482
driven_any (max = mean = median = min): 2.886922705334732
driven_lanedir_consec (max = mean = min): 2.458956036746369
driven_lanedir (max = mean = median = min): 2.458956036746369
get_duckie_state (max = mean = median = min): 2.187841078814338e-06
get_robot_state (max = mean = median = min): 0.0157166041579901
get_state_dump (max = mean = median = min): 0.010835127113691344
get_ui_image (max = mean = median = min): 0.02924223662981021
in-drivable-lane (max = mean = min): 0.4999999999999982
per-episodes details:
{"LFV-norm-loop-000-ego0": {"driven_any": 2.886922705334732, "get_ui_image": 0.02924223662981021, "step_physics": 0.25949253132140715, "survival_time": 7.599999999999981, "driven_lanedir": 2.458956036746369, "get_state_dump": 0.010835127113691344, "get_robot_state": 0.0157166041579901, "sim_render-ego0": 0.004135142743977067, "sim_render-npc0": 0.004964258156570734, "sim_render-npc1": 0.004444740956125696, "sim_render-npc2": 0.004391321169784646, "get_duckie_state": 2.187841078814338e-06, "in-drivable-lane": 0.4999999999999982, "deviation-heading": 2.3142937255027127, "agent_compute-ego0": 0.01837821879418068, "agent_compute-npc0": 0.03638524946823619, "agent_compute-npc1": 0.03685732292973138, "agent_compute-npc2": 0.037616414961472056, "complete-iteration": 0.5060382201001535, "set_robot_commands": 0.002579313477659537, "distance-from-start": 2.521870744628482, "deviation-center-line": 0.4872214854246912, "driven_lanedir_consec": 2.458956036746369, "sim_compute_sim_state": 0.023643291074466083, "sim_compute_performance-ego0": 0.0021676752302381727, "sim_compute_performance-npc0": 0.0022307610979267196, "sim_compute_performance-npc1": 0.002379387811897627, "sim_compute_performance-npc2": 0.002303969626333199}}
set_robot_commands (max = mean = median = min): 0.002579313477659537
sim_compute_performance-ego0 (max = mean = median = min): 0.0021676752302381727
sim_compute_performance-npc0 (max = mean = median = min): 0.0022307610979267196
sim_compute_performance-npc1 (max = mean = median = min): 0.002379387811897627
sim_compute_performance-npc2 (max = mean = median = min): 0.002303969626333199
sim_compute_sim_state (max = mean = median = min): 0.023643291074466083
sim_render-ego0 (max = mean = median = min): 0.004135142743977067
sim_render-npc0 (max = mean = median = min): 0.004964258156570734
sim_render-npc1 (max = mean = median = min): 0.004444740956125696
sim_render-npc2 (max = mean = median = min): 0.004391321169784646
simulation-passed: 1
step_physics (max = mean = median = min): 0.25949253132140715
survival_time (max = mean = min): 7.599999999999981
No reset possible
Job 71756 | submission 13611 | Raphael Jean | mobile-segmentation-pedestrian | aido-LF-sim-validation | sim-0of4 | status: timeout | up to date: no | evaluator: gpu-production-spot-0-01 | dates/duration: -- | message: No reset possible
Job 71705 | submission 13604 | Andras Beres | fsf+il | aido-LFV-sim-validation | sim-2of4 | status: success | up to date: no | evaluator: gpu-production-spot-0-01 | duration: 0:03:35
survival_time_median: 7.249999999999982
in-drivable-lane_median: 0.2499999999999991
driven_lanedir_consec_median: 2.931928738354392
deviation-center-line_median: 0.5281485262186425


other stats
agent_compute-ego0 (max = mean = median = min): 0.022815813756968877
agent_compute-npc0 (max = mean = median = min): 0.03621283622637187
agent_compute-npc1 (max = mean = median = min): 0.03699023266361184
agent_compute-npc2 (max = mean = median = min): 0.03788675510720031
complete-iteration (max = mean = median = min): 0.485481956233717
deviation-center-line (max = mean = min): 0.5281485262186425
deviation-heading (max = mean = median = min): 1.0734895222960503
distance-from-start (max = mean = median = min): 2.421183296984171
driven_any (max = mean = median = min): 3.069173987941065
driven_lanedir_consec (max = mean = min): 2.931928738354392
driven_lanedir (max = mean = median = min): 2.931928738354392
get_duckie_state (max = mean = median = min): 2.1800602952094926e-06
get_robot_state (max = mean = median = min): 0.016646135343264225
get_state_dump (max = mean = median = min): 0.010703795576748784
get_ui_image (max = mean = median = min): 0.023388098364006984
in-drivable-lane (max = mean = min): 0.2499999999999991
per-episodes details:
{"LFV-norm-loop-000-ego0": {"driven_any": 3.069173987941065, "get_ui_image": 0.023388098364006984, "step_physics": 0.2382684439828951, "survival_time": 7.249999999999982, "driven_lanedir": 2.931928738354392, "get_state_dump": 0.010703795576748784, "get_robot_state": 0.016646135343264225, "sim_render-ego0": 0.0041566319661597686, "sim_render-npc0": 0.004045512578258775, "sim_render-npc1": 0.004468628804977626, "sim_render-npc2": 0.004345059394836426, "get_duckie_state": 2.1800602952094926e-06, "in-drivable-lane": 0.2499999999999991, "deviation-heading": 1.0734895222960503, "agent_compute-ego0": 0.022815813756968877, "agent_compute-npc0": 0.03621283622637187, "agent_compute-npc1": 0.03699023266361184, "agent_compute-npc2": 0.03788675510720031, "complete-iteration": 0.485481956233717, "set_robot_commands": 0.0027755956127219006, "distance-from-start": 2.421183296984171, "deviation-center-line": 0.5281485262186425, "driven_lanedir_consec": 2.931928738354392, "sim_compute_sim_state": 0.024811266219779235, "sim_compute_performance-ego0": 0.002365807964377207, "sim_compute_performance-npc0": 0.0022595679923279647, "sim_compute_performance-npc1": 0.00243150044793952, "sim_compute_performance-npc2": 0.002366544449166076}}
set_robot_commands (max = mean = median = min): 0.0027755956127219006
sim_compute_performance-ego0 (max = mean = median = min): 0.002365807964377207
sim_compute_performance-npc0 (max = mean = median = min): 0.0022595679923279647
sim_compute_performance-npc1 (max = mean = median = min): 0.00243150044793952
sim_compute_performance-npc2 (max = mean = median = min): 0.002366544449166076
sim_compute_sim_state (max = mean = median = min): 0.024811266219779235
sim_render-ego0 (max = mean = median = min): 0.0041566319661597686
sim_render-npc0 (max = mean = median = min): 0.004045512578258775
sim_render-npc1 (max = mean = median = min): 0.004468628804977626
sim_render-npc2 (max = mean = median = min): 0.004345059394836426
simulation-passed: 1
step_physics (max = mean = median = min): 0.2382684439828951
survival_time (max = mean = min): 7.249999999999982
No reset possible
Job 71623 | submission 13604 | Andras Beres | fsf+il | aido-LFV-sim-validation | step sim-3of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:07:32
survival_time_median: 16.100000000000094
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 7.431878684371568
deviation-center-line_median: 1.214546951970359


other stats
agent_compute-ego0_max: 0.019946026728249185
agent_compute-ego0_mean: 0.019946026728249185
agent_compute-ego0_median: 0.019946026728249185
agent_compute-ego0_min: 0.019946026728249185
agent_compute-npc0_max: 0.04308310677023495
agent_compute-npc0_mean: 0.04308310677023495
agent_compute-npc0_median: 0.04308310677023495
agent_compute-npc0_min: 0.04308310677023495
agent_compute-npc1_max: 0.05065893388754074
agent_compute-npc1_mean: 0.05065893388754074
agent_compute-npc1_median: 0.05065893388754074
agent_compute-npc1_min: 0.05065893388754074
agent_compute-npc2_max: 0.05158334767486289
agent_compute-npc2_mean: 0.05158334767486289
agent_compute-npc2_median: 0.05158334767486289
agent_compute-npc2_min: 0.05158334767486289
agent_compute-npc3_max: 0.051413133786558735
agent_compute-npc3_mean: 0.051413133786558735
agent_compute-npc3_median: 0.051413133786558735
agent_compute-npc3_min: 0.051413133786558735
complete-iteration_max: 0.6897205974295413
complete-iteration_mean: 0.6897205974295413
complete-iteration_median: 0.6897205974295413
complete-iteration_min: 0.6897205974295413
deviation-center-line_max: 1.214546951970359
deviation-center-line_mean: 1.214546951970359
deviation-center-line_min: 1.214546951970359
deviation-heading_max: 2.3955119085333534
deviation-heading_mean: 2.3955119085333534
deviation-heading_median: 2.3955119085333534
deviation-heading_min: 2.3955119085333534
distance-from-start_max: 4.05732164683369
distance-from-start_mean: 4.05732164683369
distance-from-start_median: 4.05732164683369
distance-from-start_min: 4.05732164683369
driven_any_max: 7.538640395964277
driven_any_mean: 7.538640395964277
driven_any_median: 7.538640395964277
driven_any_min: 7.538640395964277
driven_lanedir_consec_max: 7.431878684371568
driven_lanedir_consec_mean: 7.431878684371568
driven_lanedir_consec_min: 7.431878684371568
driven_lanedir_max: 7.431878684371568
driven_lanedir_mean: 7.431878684371568
driven_lanedir_median: 7.431878684371568
driven_lanedir_min: 7.431878684371568
get_duckie_state_max: 1.937612291460067e-06
get_duckie_state_mean: 1.937612291460067e-06
get_duckie_state_median: 1.937612291460067e-06
get_duckie_state_min: 1.937612291460067e-06
get_robot_state_max: 0.021254523250709745
get_robot_state_mean: 0.021254523250709745
get_robot_state_median: 0.021254523250709745
get_robot_state_min: 0.021254523250709745
get_state_dump_max: 0.01306524586751365
get_state_dump_mean: 0.01306524586751365
get_state_dump_median: 0.01306524586751365
get_state_dump_min: 0.01306524586751365
get_ui_image_max: 0.02856997292108211
get_ui_image_mean: 0.02856997292108211
get_ui_image_median: 0.02856997292108211
get_ui_image_min: 0.02856997292108211
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LFV-norm-techtrack-000-ego0": {"driven_any": 7.538640395964277, "get_ui_image": 0.02856997292108211, "step_physics": 0.3082898128143405, "survival_time": 16.100000000000094, "driven_lanedir": 7.431878684371568, "get_state_dump": 0.01306524586751365, "get_robot_state": 0.021254523250709745, "sim_render-ego0": 0.004337406748957678, "sim_render-npc0": 0.004347937025891, "sim_render-npc1": 0.00454416880297587, "sim_render-npc2": 0.004561093569540018, "sim_render-npc3": 0.004501215813698783, "get_duckie_state": 1.937612291460067e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.3955119085333534, "agent_compute-ego0": 0.019946026728249185, "agent_compute-npc0": 0.04308310677023495, "agent_compute-npc1": 0.05065893388754074, "agent_compute-npc2": 0.05158334767486289, "agent_compute-npc3": 0.051413133786558735, "complete-iteration": 0.6897205974295413, "set_robot_commands": 0.0029220470333985132, "distance-from-start": 4.05732164683369, "deviation-center-line": 1.214546951970359, "driven_lanedir_consec": 7.431878684371568, "sim_compute_sim_state": 0.05255350166060976, "sim_compute_performance-ego0": 0.0025066738896325647, "sim_compute_performance-npc0": 0.0023532076147687693, "sim_compute_performance-npc1": 0.002502387522174847, "sim_compute_performance-npc2": 0.0025246645274915195, "sim_compute_performance-npc3": 0.002452874700351396}}
set_robot_commands_max: 0.0029220470333985132
set_robot_commands_mean: 0.0029220470333985132
set_robot_commands_median: 0.0029220470333985132
set_robot_commands_min: 0.0029220470333985132
sim_compute_performance-ego0_max: 0.0025066738896325647
sim_compute_performance-ego0_mean: 0.0025066738896325647
sim_compute_performance-ego0_median: 0.0025066738896325647
sim_compute_performance-ego0_min: 0.0025066738896325647
sim_compute_performance-npc0_max: 0.0023532076147687693
sim_compute_performance-npc0_mean: 0.0023532076147687693
sim_compute_performance-npc0_median: 0.0023532076147687693
sim_compute_performance-npc0_min: 0.0023532076147687693
sim_compute_performance-npc1_max: 0.002502387522174847
sim_compute_performance-npc1_mean: 0.002502387522174847
sim_compute_performance-npc1_median: 0.002502387522174847
sim_compute_performance-npc1_min: 0.002502387522174847
sim_compute_performance-npc2_max: 0.0025246645274915195
sim_compute_performance-npc2_mean: 0.0025246645274915195
sim_compute_performance-npc2_median: 0.0025246645274915195
sim_compute_performance-npc2_min: 0.0025246645274915195
sim_compute_performance-npc3_max: 0.002452874700351396
sim_compute_performance-npc3_mean: 0.002452874700351396
sim_compute_performance-npc3_median: 0.002452874700351396
sim_compute_performance-npc3_min: 0.002452874700351396
sim_compute_sim_state_max: 0.05255350166060976
sim_compute_sim_state_mean: 0.05255350166060976
sim_compute_sim_state_median: 0.05255350166060976
sim_compute_sim_state_min: 0.05255350166060976
sim_render-ego0_max: 0.004337406748957678
sim_render-ego0_mean: 0.004337406748957678
sim_render-ego0_median: 0.004337406748957678
sim_render-ego0_min: 0.004337406748957678
sim_render-npc0_max: 0.004347937025891
sim_render-npc0_mean: 0.004347937025891
sim_render-npc0_median: 0.004347937025891
sim_render-npc0_min: 0.004347937025891
sim_render-npc1_max: 0.00454416880297587
sim_render-npc1_mean: 0.00454416880297587
sim_render-npc1_median: 0.00454416880297587
sim_render-npc1_min: 0.00454416880297587
sim_render-npc2_max: 0.004561093569540018
sim_render-npc2_mean: 0.004561093569540018
sim_render-npc2_median: 0.004561093569540018
sim_render-npc2_min: 0.004561093569540018
sim_render-npc3_max: 0.004501215813698783
sim_render-npc3_mean: 0.004501215813698783
sim_render-npc3_median: 0.004501215813698783
sim_render-npc3_min: 0.004501215813698783
simulation-passed: 1
step_physics_max: 0.3082898128143405
step_physics_mean: 0.3082898128143405
step_physics_median: 0.3082898128143405
step_physics_min: 0.3082898128143405
survival_time_max: 16.100000000000094
survival_time_mean: 16.100000000000094
survival_time_min: 16.100000000000094
No reset possible
Job 71457 | submission 13609 | Andras Beres | fsf+il | aido-LFV_multi-sim-validation | step 403 | success | up to date: yes | evaluator gpu-production-spot-0-01 | duration 0:14:43
survival_time_median: 19.95000000000015
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 5.064153363341315
deviation-center-line_median: 1.2926813832289414


other stats
agent_compute-ego0_max: 0.022172074913978577
agent_compute-ego0_mean: 0.02153053503618775
agent_compute-ego0_median: 0.022172074913978577
agent_compute-ego0_min: 0.02024745528060611
agent_compute-ego1_max: 0.02111417055130005
agent_compute-ego1_mean: 0.020942492650286988
agent_compute-ego1_median: 0.02111417055130005
agent_compute-ego1_min: 0.02059913684826086
agent_compute-ego2_max: 0.01982649087905884
agent_compute-ego2_mean: 0.01982649087905884
agent_compute-ego2_median: 0.01982649087905884
agent_compute-ego2_min: 0.01982649087905884
agent_compute-ego3_max: 0.018919888138771056
agent_compute-ego3_mean: 0.018919888138771056
agent_compute-ego3_median: 0.018919888138771056
agent_compute-ego3_min: 0.018919888138771056
complete-iteration_max: 0.47646469712257383
complete-iteration_mean: 0.41731841636569583
complete-iteration_median: 0.47646469712257383
complete-iteration_min: 0.2990258548519399
deviation-center-line_max: 3.1330611810833413
deviation-center-line_mean: 1.4229302075867396
deviation-center-line_min: 0.5772751121145354
deviation-heading_max: 3.295404931706402
deviation-heading_mean: 2.067622180380758
deviation-heading_median: 2.327732668207539
deviation-heading_min: 0.5682779761468647
distance-from-start_max: 3.210434752948629
distance-from-start_mean: 1.6823026015137983
distance-from-start_median: 1.4615066837102144
distance-from-start_min: 0.9983160124339778
driven_any_max: 9.720904137354308
driven_any_mean: 5.343484281247657
driven_any_median: 5.375664246020854
driven_any_min: 1.274643289038442
driven_lanedir_consec_max: 9.542340514340292
driven_lanedir_consec_mean: 5.169167458426595
driven_lanedir_consec_min: 1.2425289075532178
driven_lanedir_max: 9.542340514340292
driven_lanedir_mean: 5.169167458426595
driven_lanedir_median: 5.064153363341315
driven_lanedir_min: 1.2425289075532178
get_duckie_state_max: 2.02178955078125e-06
get_duckie_state_mean: 1.969282383179114e-06
get_duckie_state_median: 2.02178955078125e-06
get_duckie_state_min: 1.8642680479748417e-06
get_robot_state_max: 0.01652008593082428
get_robot_state_mean: 0.013832387574041635
get_robot_state_median: 0.01652008593082428
get_robot_state_min: 0.008456990860476353
get_state_dump_max: 0.010692756175994873
get_state_dump_mean: 0.009539822611478297
get_state_dump_median: 0.010692756175994873
get_state_dump_min: 0.0072339554824451405
get_ui_image_max: 0.02499730885028839
get_ui_image_mean: 0.023169532762502287
get_ui_image_median: 0.02499730885028839
get_ui_image_min: 0.01951398058693008
in-drivable-lane_max: 1.2500000000000178
in-drivable-lane_mean: 0.20833333333333628
in-drivable-lane_min: 0.0
per-episodes
details{"LFV_multi-norm-techtrack-000-ego0": {"driven_any": 1.274643289038442, "get_ui_image": 0.02499730885028839, "step_physics": 0.28833358764648437, "survival_time": 19.95000000000015, "driven_lanedir": 1.2425289075532178, "get_state_dump": 0.010692756175994873, "get_robot_state": 0.01652008593082428, "sim_render-ego0": 0.004154184460639953, "sim_render-ego1": 0.004167896509170532, "sim_render-ego2": 0.00420563280582428, "sim_render-ego3": 0.004069299697875977, "get_duckie_state": 2.02178955078125e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.5526037985910608, "agent_compute-ego0": 0.022172074913978577, "agent_compute-ego1": 0.02111417055130005, "agent_compute-ego2": 0.01982649087905884, "agent_compute-ego3": 0.018919888138771056, "complete-iteration": 0.47646469712257383, "set_robot_commands": 0.0027440434694290163, "distance-from-start": 0.9983160124339778, "deviation-center-line": 3.1330611810833413, "driven_lanedir_consec": 1.2425289075532178, "sim_compute_sim_state": 0.016887184977531434, "sim_compute_performance-ego0": 0.0023167163133621218, "sim_compute_performance-ego1": 0.002304027080535889, "sim_compute_performance-ego2": 0.002282293438911438, "sim_compute_performance-ego3": 0.0022752207517623903}, "LFV_multi-norm-techtrack-000-ego1": {"driven_any": 8.923875368566618, "get_ui_image": 0.02499730885028839, "step_physics": 0.28833358764648437, "survival_time": 19.95000000000015, "driven_lanedir": 8.290332629939707, "get_state_dump": 0.010692756175994873, "get_robot_state": 0.01652008593082428, "sim_render-ego0": 0.004154184460639953, "sim_render-ego1": 0.004167896509170532, "sim_render-ego2": 0.00420563280582428, "sim_render-ego3": 0.004069299697875977, "get_duckie_state": 2.02178955078125e-06, "in-drivable-lane": 1.2500000000000178, "deviation-heading": 3.2295488674325608, "agent_compute-ego0": 0.022172074913978577, "agent_compute-ego1": 0.02111417055130005, "agent_compute-ego2": 0.01982649087905884, "agent_compute-ego3": 0.018919888138771056, 
"complete-iteration": 0.47646469712257383, "set_robot_commands": 0.0027440434694290163, "distance-from-start": 3.210434752948629, "deviation-center-line": 1.4524239491673197, "driven_lanedir_consec": 8.290332629939707, "sim_compute_sim_state": 0.016887184977531434, "sim_compute_performance-ego0": 0.0023167163133621218, "sim_compute_performance-ego1": 0.002304027080535889, "sim_compute_performance-ego2": 0.002282293438911438, "sim_compute_performance-ego3": 0.0022752207517623903}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 1.3901544004848656, "get_ui_image": 0.02499730885028839, "step_physics": 0.28833358764648437, "survival_time": 19.95000000000015, "driven_lanedir": 1.3778793906768962, "get_state_dump": 0.010692756175994873, "get_robot_state": 0.01652008593082428, "sim_render-ego0": 0.004154184460639953, "sim_render-ego1": 0.004167896509170532, "sim_render-ego2": 0.00420563280582428, "sim_render-ego3": 0.004069299697875977, "get_duckie_state": 2.02178955078125e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.6570359705836406, "agent_compute-ego0": 0.022172074913978577, "agent_compute-ego1": 0.02111417055130005, "agent_compute-ego2": 0.01982649087905884, "agent_compute-ego3": 0.018919888138771056, "complete-iteration": 0.47646469712257383, "set_robot_commands": 0.0027440434694290163, "distance-from-start": 1.378197767506811, "deviation-center-line": 0.7894582366973584, "driven_lanedir_consec": 1.3778793906768962, "sim_compute_sim_state": 0.016887184977531434, "sim_compute_performance-ego0": 0.0023167163133621218, "sim_compute_performance-ego1": 0.002304027080535889, "sim_compute_performance-ego2": 0.002282293438911438, "sim_compute_performance-ego3": 0.0022752207517623903}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 1.845143842418472, "get_ui_image": 0.02499730885028839, "step_physics": 0.28833358764648437, "survival_time": 19.95000000000015, "driven_lanedir": 1.8379740967429232, "get_state_dump": 0.010692756175994873, "get_robot_state": 
0.01652008593082428, "sim_render-ego0": 0.004154184460639953, "sim_render-ego1": 0.004167896509170532, "sim_render-ego2": 0.00420563280582428, "sim_render-ego3": 0.004069299697875977, "get_duckie_state": 2.02178955078125e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.5682779761468647, "agent_compute-ego0": 0.022172074913978577, "agent_compute-ego1": 0.02111417055130005, "agent_compute-ego2": 0.01982649087905884, "agent_compute-ego3": 0.018919888138771056, "complete-iteration": 0.47646469712257383, "set_robot_commands": 0.0027440434694290163, "distance-from-start": 1.8382199910135255, "deviation-center-line": 0.5772751121145354, "driven_lanedir_consec": 1.8379740967429232, "sim_compute_sim_state": 0.016887184977531434, "sim_compute_performance-ego0": 0.0023167163133621218, "sim_compute_performance-ego1": 0.002304027080535889, "sim_compute_performance-ego2": 0.002282293438911438, "sim_compute_performance-ego3": 0.0022752207517623903}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 8.906184649623237, "get_ui_image": 0.01951398058693008, "step_physics": 0.19410352246596085, "survival_time": 20.15000000000015, "driven_lanedir": 8.723949211306536, "get_state_dump": 0.0072339554824451405, "get_robot_state": 0.008456990860476353, "sim_render-ego0": 0.004207419286860098, "sim_render-ego1": 0.00433544121166267, "get_duckie_state": 1.8642680479748417e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.295404931706402, "agent_compute-ego0": 0.02024745528060611, "agent_compute-ego1": 0.02059913684826086, "complete-iteration": 0.2990258548519399, "set_robot_commands": 0.00271888003490939, "distance-from-start": 1.12383148526623, "deviation-center-line": 1.2498298679013589, "driven_lanedir_consec": 8.723949211306536, "sim_compute_sim_state": 0.009950369301408824, "sim_compute_performance-ego0": 0.002328887434289007, "sim_compute_performance-ego1": 0.002435711940916458}, "LFV_multi-norm-small_loop-000-ego1": {"driven_any": 9.720904137354308, "get_ui_image": 
0.01951398058693008, "step_physics": 0.19410352246596085, "survival_time": 20.15000000000015, "driven_lanedir": 9.542340514340292, "get_state_dump": 0.0072339554824451405, "get_robot_state": 0.008456990860476353, "sim_render-ego0": 0.004207419286860098, "sim_render-ego1": 0.00433544121166267, "get_duckie_state": 1.8642680479748417e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.1028615378240176, "agent_compute-ego0": 0.02024745528060611, "agent_compute-ego1": 0.02059913684826086, "complete-iteration": 0.2990258548519399, "set_robot_commands": 0.00271888003490939, "distance-from-start": 1.5448155999136182, "deviation-center-line": 1.335532898556524, "driven_lanedir_consec": 9.542340514340292, "sim_compute_sim_state": 0.009950369301408824, "sim_compute_performance-ego0": 0.002328887434289007, "sim_compute_performance-ego1": 0.002435711940916458}}
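The `*_max`/`*_mean`/`*_median`/`*_min` rows in these tables are per-metric aggregates over the episodes listed in the `details` blob. A minimal sketch of that reduction (an assumed reconstruction, not the actual duckietown-challenges scoring code; episode names and values are abbreviated from the blob above):

```python
# Sketch: reduce per-episode "details" metrics to the four summary statistics
# shown in the stats tables. Illustrative only; the real evaluator code may
# differ. Episode dict is abbreviated from the LFV_multi job above.
import json
import statistics

details = json.loads("""
{
  "LFV_multi-norm-techtrack-000-ego0": {"survival_time": 19.95},
  "LFV_multi-norm-techtrack-000-ego1": {"survival_time": 19.95},
  "LFV_multi-norm-small_loop-000-ego0": {"survival_time": 20.15}
}
""")

def aggregate(metric: str) -> dict:
    """Collect one metric across all episodes and compute max/mean/median/min."""
    values = [episode[metric] for episode in details.values()]
    return {
        f"{metric}_max": max(values),
        f"{metric}_mean": statistics.mean(values),
        f"{metric}_median": statistics.median(values),
        f"{metric}_min": min(values),
    }

print(aggregate("survival_time"))
```

With a single episode, all four statistics collapse to the same value, which is why most blocks on this page repeat one number four times.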
set_robot_commands_max: 0.0027440434694290163
set_robot_commands_mean: 0.0027356556579224748
set_robot_commands_median: 0.0027440434694290163
set_robot_commands_min: 0.00271888003490939
sim_compute_performance-ego0_max: 0.002328887434289007
sim_compute_performance-ego0_mean: 0.0023207733536710832
sim_compute_performance-ego0_median: 0.0023167163133621218
sim_compute_performance-ego0_min: 0.0023167163133621218
sim_compute_performance-ego1_max: 0.002435711940916458
sim_compute_performance-ego1_mean: 0.002347922033996078
sim_compute_performance-ego1_median: 0.002304027080535889
sim_compute_performance-ego1_min: 0.002304027080535889
sim_compute_performance-ego2_max: 0.002282293438911438
sim_compute_performance-ego2_mean: 0.002282293438911438
sim_compute_performance-ego2_median: 0.002282293438911438
sim_compute_performance-ego2_min: 0.002282293438911438
sim_compute_performance-ego3_max: 0.0022752207517623903
sim_compute_performance-ego3_mean: 0.0022752207517623903
sim_compute_performance-ego3_median: 0.0022752207517623903
sim_compute_performance-ego3_min: 0.0022752207517623903
sim_compute_sim_state_max: 0.016887184977531434
sim_compute_sim_state_mean: 0.014574913085490564
sim_compute_sim_state_median: 0.016887184977531434
sim_compute_sim_state_min: 0.009950369301408824
sim_render-ego0_max: 0.004207419286860098
sim_render-ego0_mean: 0.004171929402713335
sim_render-ego0_median: 0.004154184460639953
sim_render-ego0_min: 0.004154184460639953
sim_render-ego1_max: 0.00433544121166267
sim_render-ego1_mean: 0.0042237447433345785
sim_render-ego1_median: 0.004167896509170532
sim_render-ego1_min: 0.004167896509170532
sim_render-ego2_max: 0.00420563280582428
sim_render-ego2_mean: 0.00420563280582428
sim_render-ego2_median: 0.00420563280582428
sim_render-ego2_min: 0.00420563280582428
sim_render-ego3_max: 0.004069299697875977
sim_render-ego3_mean: 0.004069299697875977
sim_render-ego3_median: 0.004069299697875977
sim_render-ego3_min: 0.004069299697875977
simulation-passed: 1
step_physics_max: 0.28833358764648437
step_physics_mean: 0.25692356591964316
step_physics_median: 0.28833358764648437
step_physics_min: 0.19410352246596085
survival_time_max: 20.15000000000015
survival_time_mean: 20.016666666666815
survival_time_min: 19.95000000000015
No reset possible
Job 71400 | submission 13535 | András Kalapos 🇭🇺 | 3090 | aido-LF-sim-validation | step sim-2of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:08:19
driven_lanedir_consec_median: 28.085891535443025
survival_time_median: 59.99999999999873
deviation-center-line_median: 1.6128720021396252
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.013284055319157963
agent_compute-ego0_mean: 0.013284055319157963
agent_compute-ego0_median: 0.013284055319157963
agent_compute-ego0_min: 0.013284055319157963
complete-iteration_max: 0.14616526949911887
complete-iteration_mean: 0.14616526949911887
complete-iteration_median: 0.14616526949911887
complete-iteration_min: 0.14616526949911887
deviation-center-line_max: 1.6128720021396252
deviation-center-line_mean: 1.6128720021396252
deviation-center-line_min: 1.6128720021396252
deviation-heading_max: 5.7016393130769965
deviation-heading_mean: 5.7016393130769965
deviation-heading_median: 5.7016393130769965
deviation-heading_min: 5.7016393130769965
distance-from-start_max: 1.1158236408565625
distance-from-start_mean: 1.1158236408565625
distance-from-start_median: 1.1158236408565625
distance-from-start_min: 1.1158236408565625
driven_any_max: 28.30108494295077
driven_any_mean: 28.30108494295077
driven_any_median: 28.30108494295077
driven_any_min: 28.30108494295077
driven_lanedir_consec_max: 28.085891535443025
driven_lanedir_consec_mean: 28.085891535443025
driven_lanedir_consec_min: 28.085891535443025
driven_lanedir_max: 28.085891535443025
driven_lanedir_mean: 28.085891535443025
driven_lanedir_median: 28.085891535443025
driven_lanedir_min: 28.085891535443025
get_duckie_state_max: 1.193283995819727e-06
get_duckie_state_mean: 1.193283995819727e-06
get_duckie_state_median: 1.193283995819727e-06
get_duckie_state_min: 1.193283995819727e-06
get_robot_state_max: 0.003612735487837081
get_robot_state_mean: 0.003612735487837081
get_robot_state_median: 0.003612735487837081
get_robot_state_min: 0.003612735487837081
get_state_dump_max: 0.004416974954660687
get_state_dump_mean: 0.004416974954660687
get_state_dump_median: 0.004416974954660687
get_state_dump_min: 0.004416974954660687
get_ui_image_max: 0.01691228166210165
get_ui_image_mean: 0.01691228166210165
get_ui_image_median: 0.01691228166210165
get_ui_image_min: 0.01691228166210165
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 28.30108494295077, "get_ui_image": 0.01691228166210165, "step_physics": 0.0948130258612589, "survival_time": 59.99999999999873, "driven_lanedir": 28.085891535443025, "get_state_dump": 0.004416974954660687, "get_robot_state": 0.003612735487837081, "sim_render-ego0": 0.0035755572767678548, "get_duckie_state": 1.193283995819727e-06, "in-drivable-lane": 0.0, "deviation-heading": 5.7016393130769965, "agent_compute-ego0": 0.013284055319157963, "complete-iteration": 0.14616526949911887, "set_robot_commands": 0.002224039376327934, "distance-from-start": 1.1158236408565625, "deviation-center-line": 1.6128720021396252, "driven_lanedir_consec": 28.085891535443025, "sim_compute_sim_state": 0.005359454119235252, "sim_compute_performance-ego0": 0.001893052649835464}}
set_robot_commands_max: 0.002224039376327934
set_robot_commands_mean: 0.002224039376327934
set_robot_commands_median: 0.002224039376327934
set_robot_commands_min: 0.002224039376327934
sim_compute_performance-ego0_max: 0.001893052649835464
sim_compute_performance-ego0_mean: 0.001893052649835464
sim_compute_performance-ego0_median: 0.001893052649835464
sim_compute_performance-ego0_min: 0.001893052649835464
sim_compute_sim_state_max: 0.005359454119235252
sim_compute_sim_state_mean: 0.005359454119235252
sim_compute_sim_state_median: 0.005359454119235252
sim_compute_sim_state_min: 0.005359454119235252
sim_render-ego0_max: 0.0035755572767678548
sim_render-ego0_mean: 0.0035755572767678548
sim_render-ego0_median: 0.0035755572767678548
sim_render-ego0_min: 0.0035755572767678548
simulation-passed: 1
step_physics_max: 0.0948130258612589
step_physics_mean: 0.0948130258612589
step_physics_median: 0.0948130258612589
step_physics_min: 0.0948130258612589
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 71350 | submission 13580 | Andras Beres | 202-1 | aido-LF-sim-validation | step sim-1of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:09:31
driven_lanedir_consec_median: 25.647713969505123
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.0512402169126585
in-drivable-lane_median: 4.849999999999891


other stats
agent_compute-ego0_max: 0.017578277659356642
agent_compute-ego0_mean: 0.017578277659356642
agent_compute-ego0_median: 0.017578277659356642
agent_compute-ego0_min: 0.017578277659356642
complete-iteration_max: 0.20173033647592817
complete-iteration_mean: 0.20173033647592817
complete-iteration_median: 0.20173033647592817
complete-iteration_min: 0.20173033647592817
deviation-center-line_max: 4.0512402169126585
deviation-center-line_mean: 4.0512402169126585
deviation-center-line_min: 4.0512402169126585
deviation-heading_max: 8.995208523269762
deviation-heading_mean: 8.995208523269762
deviation-heading_median: 8.995208523269762
deviation-heading_min: 8.995208523269762
distance-from-start_max: 3.4893511165525526
distance-from-start_mean: 3.4893511165525526
distance-from-start_median: 3.4893511165525526
distance-from-start_min: 3.4893511165525526
driven_any_max: 27.94575763077012
driven_any_mean: 27.94575763077012
driven_any_median: 27.94575763077012
driven_any_min: 27.94575763077012
driven_lanedir_consec_max: 25.647713969505123
driven_lanedir_consec_mean: 25.647713969505123
driven_lanedir_consec_min: 25.647713969505123
driven_lanedir_max: 25.647713969505123
driven_lanedir_mean: 25.647713969505123
driven_lanedir_median: 25.647713969505123
driven_lanedir_min: 25.647713969505123
get_duckie_state_max: 2.3889501922632833e-06
get_duckie_state_mean: 2.3889501922632833e-06
get_duckie_state_median: 2.3889501922632833e-06
get_duckie_state_min: 2.3889501922632833e-06
get_robot_state_max: 0.003838183183058612
get_robot_state_mean: 0.003838183183058612
get_robot_state_median: 0.003838183183058612
get_robot_state_min: 0.003838183183058612
get_state_dump_max: 0.00490967379720086
get_state_dump_mean: 0.00490967379720086
get_state_dump_median: 0.00490967379720086
get_state_dump_min: 0.00490967379720086
get_ui_image_max: 0.02219865443208235
get_ui_image_mean: 0.02219865443208235
get_ui_image_median: 0.02219865443208235
get_ui_image_min: 0.02219865443208235
in-drivable-lane_max: 4.849999999999891
in-drivable-lane_mean: 4.849999999999891
in-drivable-lane_min: 4.849999999999891
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 27.94575763077012, "get_ui_image": 0.02219865443208235, "step_physics": 0.13346501651354178, "survival_time": 59.99999999999873, "driven_lanedir": 25.647713969505123, "get_state_dump": 0.00490967379720086, "get_robot_state": 0.003838183183058612, "sim_render-ego0": 0.003828983521282822, "get_duckie_state": 2.3889501922632833e-06, "in-drivable-lane": 4.849999999999891, "deviation-heading": 8.995208523269762, "agent_compute-ego0": 0.017578277659356642, "complete-iteration": 0.20173033647592817, "set_robot_commands": 0.0024375506582903325, "distance-from-start": 3.4893511165525526, "deviation-center-line": 4.0512402169126585, "driven_lanedir_consec": 25.647713969505123, "sim_compute_sim_state": 0.011350045295480287, "sim_compute_performance-ego0": 0.002029872357497902}}
set_robot_commands_max: 0.0024375506582903325
set_robot_commands_mean: 0.0024375506582903325
set_robot_commands_median: 0.0024375506582903325
set_robot_commands_min: 0.0024375506582903325
sim_compute_performance-ego0_max: 0.002029872357497902
sim_compute_performance-ego0_mean: 0.002029872357497902
sim_compute_performance-ego0_median: 0.002029872357497902
sim_compute_performance-ego0_min: 0.002029872357497902
sim_compute_sim_state_max: 0.011350045295480287
sim_compute_sim_state_mean: 0.011350045295480287
sim_compute_sim_state_median: 0.011350045295480287
sim_compute_sim_state_min: 0.011350045295480287
sim_render-ego0_max: 0.003828983521282822
sim_render-ego0_mean: 0.003828983521282822
sim_render-ego0_median: 0.003828983521282822
sim_render-ego0_min: 0.003828983521282822
simulation-passed: 1
step_physics_max: 0.13346501651354178
step_physics_mean: 0.13346501651354178
step_physics_median: 0.13346501651354178
step_physics_min: 0.13346501651354178
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 71274 | submission 13595 | Andras Beres | fsf+il | aido-LF-sim-validation | step sim-2of4 | success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:08:07
driven_lanedir_consec_median: 26.71980833290937
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.996814690460657
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.0194587534809986
agent_compute-ego0_mean: 0.0194587534809986
agent_compute-ego0_median: 0.0194587534809986
agent_compute-ego0_min: 0.0194587534809986
complete-iteration_max: 0.17042180858583475
complete-iteration_mean: 0.17042180858583475
complete-iteration_median: 0.17042180858583475
complete-iteration_min: 0.17042180858583475
deviation-center-line_max: 3.996814690460657
deviation-center-line_mean: 3.996814690460657
deviation-center-line_min: 3.996814690460657
deviation-heading_max: 10.306263992233411
deviation-heading_mean: 10.306263992233411
deviation-heading_median: 10.306263992233411
deviation-heading_min: 10.306263992233411
distance-from-start_max: 1.1523792800214756
distance-from-start_mean: 1.1523792800214756
distance-from-start_median: 1.1523792800214756
distance-from-start_min: 1.1523792800214756
driven_any_max: 27.31264380413081
driven_any_mean: 27.31264380413081
driven_any_median: 27.31264380413081
driven_any_min: 27.31264380413081
driven_lanedir_consec_max: 26.71980833290937
driven_lanedir_consec_mean: 26.71980833290937
driven_lanedir_consec_min: 26.71980833290937
driven_lanedir_max: 26.71980833290937
driven_lanedir_mean: 26.71980833290937
driven_lanedir_median: 26.71980833290937
driven_lanedir_min: 26.71980833290937
get_duckie_state_max: 1.2961156560816832e-06
get_duckie_state_mean: 1.2961156560816832e-06
get_duckie_state_median: 1.2961156560816832e-06
get_duckie_state_min: 1.2961156560816832e-06
get_robot_state_max: 0.0038246618123177582
get_robot_state_mean: 0.0038246618123177582
get_robot_state_median: 0.0038246618123177582
get_robot_state_min: 0.0038246618123177582
get_state_dump_max: 0.004700611076386743
get_state_dump_mean: 0.004700611076386743
get_state_dump_median: 0.004700611076386743
get_state_dump_min: 0.004700611076386743
get_ui_image_max: 0.018041873156875495
get_ui_image_mean: 0.018041873156875495
get_ui_image_median: 0.018041873156875495
get_ui_image_min: 0.018041873156875495
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 27.31264380413081, "get_ui_image": 0.018041873156875495, "step_physics": 0.110418792767489, "survival_time": 59.99999999999873, "driven_lanedir": 26.71980833290937, "get_state_dump": 0.004700611076386743, "get_robot_state": 0.0038246618123177582, "sim_render-ego0": 0.0037865805486953824, "get_duckie_state": 1.2961156560816832e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.306263992233411, "agent_compute-ego0": 0.0194587534809986, "complete-iteration": 0.17042180858583475, "set_robot_commands": 0.0024386959012402385, "distance-from-start": 1.1523792800214756, "deviation-center-line": 3.996814690460657, "driven_lanedir_consec": 26.71980833290937, "sim_compute_sim_state": 0.005608655530943859, "sim_compute_performance-ego0": 0.002055600521268694}}
set_robot_commands_max: 0.0024386959012402385
set_robot_commands_mean: 0.0024386959012402385
set_robot_commands_median: 0.0024386959012402385
set_robot_commands_min: 0.0024386959012402385
sim_compute_performance-ego0_max: 0.002055600521268694
sim_compute_performance-ego0_mean: 0.002055600521268694
sim_compute_performance-ego0_median: 0.002055600521268694
sim_compute_performance-ego0_min: 0.002055600521268694
sim_compute_sim_state_max: 0.005608655530943859
sim_compute_sim_state_mean: 0.005608655530943859
sim_compute_sim_state_median: 0.005608655530943859
sim_compute_sim_state_min: 0.005608655530943859
sim_render-ego0_max: 0.0037865805486953824
sim_render-ego0_mean: 0.0037865805486953824
sim_render-ego0_median: 0.0037865805486953824
sim_render-ego0_min: 0.0037865805486953824
simulation-passed: 1
step_physics_max: 0.110418792767489
step_physics_mean: 0.110418792767489
step_physics_median: 0.110418792767489
step_physics_min: 0.110418792767489
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
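Each job's per-episode details cell is a JSON object keyed by episode name, with one numeric entry per stat. A minimal sketch of reading one stat back out of such a cell (the `details_text` string below is an abridged, hypothetical two-key excerpt of a real cell, not the full record):

```python
import json

# Abridged stand-in for a per-episode details cell (two keys only).
details_text = (
    '{"LF-norm-small_loop-000-ego0":'
    ' {"survival_time": 59.99999999999873,'
    '  "driven_lanedir_consec": 26.71980833290937}}'
)

details = json.loads(details_text)
for episode, stats in details.items():
    # One row per episode: name plus the stats of interest.
    print(episode, stats["survival_time"], stats["driven_lanedir_consec"])
```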
Job 71239 | submission 13595 | Andras Beres | fsf+il | aido-LF-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:08:19
driven_lanedir_consec_median: 28.976418388920443
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.319283763081473
in-drivable-lane_median: 1.3499999999999837


other stats
agent_compute-ego0 (min = median = mean = max): 0.017486648496044963
complete-iteration (min = median = mean = max): 0.16878053965318413
deviation-center-line (min = mean = max): 4.319283763081473
deviation-heading (min = median = mean = max): 7.558055824072081
distance-from-start (min = median = mean = max): 2.84006163253915
driven_any (min = median = mean = max): 29.837114775415625
driven_lanedir_consec (min = mean = max): 28.976418388920443
driven_lanedir (min = median = mean = max): 28.976418388920443
get_duckie_state (min = median = mean = max): 1.3592439726131543e-06
get_robot_state (min = median = mean = max): 0.00399018068496234
get_state_dump (min = median = mean = max): 0.004919417394785758
get_ui_image (min = median = mean = max): 0.019580870047894047
in-drivable-lane (min = mean = max): 1.3499999999999837
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 29.837114775415625, "get_ui_image": 0.019580870047894047, "step_physics": 0.10572386224700649, "survival_time": 59.99999999999873, "driven_lanedir": 28.976418388920443, "get_state_dump": 0.004919417394785758, "get_robot_state": 0.00399018068496234, "sim_render-ego0": 0.0038901151566580862, "get_duckie_state": 1.3592439726131543e-06, "in-drivable-lane": 1.3499999999999837, "deviation-heading": 7.558055824072081, "agent_compute-ego0": 0.017486648496044963, "complete-iteration": 0.16878053965318413, "set_robot_commands": 0.002552358633671871, "distance-from-start": 2.84006163253915, "deviation-center-line": 4.319283763081473, "driven_lanedir_consec": 28.976418388920443, "sim_compute_sim_state": 0.008459735373275464, "sim_compute_performance-ego0": 0.0020812754825588866}}
set_robot_commands (min = median = mean = max): 0.002552358633671871
sim_compute_performance-ego0 (min = median = mean = max): 0.0020812754825588866
sim_compute_sim_state (min = median = mean = max): 0.008459735373275464
sim_render-ego0 (min = median = mean = max): 0.0038901151566580862
simulation-passed: 1
step_physics (min = median = mean = max): 0.10572386224700649
survival_time (min = mean = max): 59.99999999999873
No reset possible
Job 71192 | submission 14454 | Cam Linke | exercises_braitenberg | mooc-BV1 | sim-0of5 | success | up to date: no | gpu-production-spot-0-01 | duration 0:09:29
distance-from-start_mean: 3.278930351947263


other stats
agent_compute-ego0 (min = median = mean = max): 0.011864370247567764
complete-iteration (min = median = mean = max): 0.2380284692200137
deviation-center-line (min = median = mean = max): 0.0
deviation-heading (min = median = mean = max): 0.0
distance-from-start (min = median = max): 3.278930351947263
driven_any (min = median = mean = max): 4.379281546817927
driven_lanedir_consec (min = median = mean = max): 0.0
driven_lanedir (min = median = mean = max): 0.0
get_duckie_state (min = median = mean = max): 0.09554274742592107
get_robot_state (min = median = mean = max): 0.004002873774425524
get_state_dump (min = median = mean = max): 0.019536226344220513
get_ui_image (min = median = mean = max): 0.015918482972982345
in-drivable-lane (min = median = mean = max): 53.199999999999115
per-episodes
details{"d45-ego0": {"driven_any": 4.379281546817927, "get_ui_image": 0.015918482972982345, "step_physics": 0.0731918001398794, "survival_time": 53.199999999999115, "driven_lanedir": 0.0, "get_state_dump": 0.019536226344220513, "get_robot_state": 0.004002873774425524, "sim_render-ego0": 0.0037467567014022618, "get_duckie_state": 0.09554274742592107, "in-drivable-lane": 53.199999999999115, "deviation-heading": 0.0, "agent_compute-ego0": 0.011864370247567764, "complete-iteration": 0.2380284692200137, "set_robot_commands": 0.002387193223120461, "distance-from-start": 3.278930351947263, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.009696651288601154, "sim_compute_performance-ego0": 0.0020444672991972016}}
set_robot_commands (min = median = mean = max): 0.002387193223120461
sim_compute_performance-ego0 (min = median = mean = max): 0.0020444672991972016
sim_compute_sim_state (min = median = mean = max): 0.009696651288601154
sim_render-ego0 (min = median = mean = max): 0.0037467567014022618
simulation-passed: 1
step_physics (min = median = mean = max): 0.0731918001398794
survival_time (min = median = mean = max): 53.199999999999115
No reset possible
Job 71161 | submission 14482 | Sampsa Ranta | Need Z to fly! This duck votes yellow. | mooc-BV1 | sim-1of5 | success | up to date: no | gpu-production-spot-0-01 | duration 0:05:34
distance-from-start_mean: 4.941122914855825


other stats
agent_compute-ego0 (min = median = mean = max): 0.012266963779807329
complete-iteration (min = median = mean = max): 0.26940574093969044
deviation-center-line (min = median = mean = max): 0.0
deviation-heading (min = median = mean = max): 0.0
distance-from-start (min = median = max): 4.941122914855825
driven_any (min = median = mean = max): 5.016596166778645
driven_lanedir_consec (min = median = mean = max): 0.0
driven_lanedir (min = median = mean = max): 0.0
get_duckie_state (min = median = mean = max): 0.12206962722504212
get_robot_state (min = median = mean = max): 0.003976399789075414
get_state_dump (min = median = mean = max): 0.024325137604734377
get_ui_image (min = median = mean = max): 0.016295616260307753
in-drivable-lane (min = median = mean = max): 25.00000000000022
per-episodes
details{"d60-ego0": {"driven_any": 5.016596166778645, "get_ui_image": 0.016295616260307753, "step_physics": 0.07391163546168161, "survival_time": 25.00000000000022, "driven_lanedir": 0.0, "get_state_dump": 0.024325137604734377, "get_robot_state": 0.003976399789075414, "sim_render-ego0": 0.003698377552146683, "get_duckie_state": 0.12206962722504212, "in-drivable-lane": 25.00000000000022, "deviation-heading": 0.0, "agent_compute-ego0": 0.012266963779807329, "complete-iteration": 0.26940574093969044, "set_robot_commands": 0.002312329952826281, "distance-from-start": 4.941122914855825, "deviation-center-line": 0.0, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.00847331682840983, "sim_compute_performance-ego0": 0.001971372825180937}}
set_robot_commands (min = median = mean = max): 0.002312329952826281
sim_compute_performance-ego0 (min = median = mean = max): 0.001971372825180937
sim_compute_sim_state (min = median = mean = max): 0.00847331682840983
sim_render-ego0 (min = median = mean = max): 0.003698377552146683
simulation-passed: 1
step_physics (min = median = mean = max): 0.07391163546168161
survival_time (min = median = mean = max): 25.00000000000022
No reset possible
Job 71130 | submission 13641 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LF-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | duration 0:09:23
driven_lanedir_consec_median: 24.517852007264064
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.2041193496599036
in-drivable-lane_median: 2.5999999999999517


other stats
agent_compute-ego0 (min = median = mean = max): 0.018700611779929995
complete-iteration (min = median = mean = max): 0.22166376328289655
deviation-center-line (min = mean = max): 3.2041193496599036
deviation-heading (min = median = mean = max): 11.192708719369604
distance-from-start (min = median = mean = max): 2.838714518150331
driven_any (min = median = mean = max): 26.399083760195847
driven_lanedir_consec (min = mean = max): 24.517852007264064
driven_lanedir (min = median = mean = max): 24.517852007264064
get_duckie_state (min = median = mean = max): 1.3540825379281914e-06
get_robot_state (min = median = mean = max): 0.004000731650041998
get_state_dump (min = median = mean = max): 0.004919587722130362
get_ui_image (min = median = mean = max): 0.02780723373260625
in-drivable-lane (min = mean = max): 2.5999999999999517
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 26.399083760195847, "get_ui_image": 0.02780723373260625, "step_physics": 0.14912047751440197, "survival_time": 59.99999999999873, "driven_lanedir": 24.517852007264064, "get_state_dump": 0.004919587722130362, "get_robot_state": 0.004000731650041998, "sim_render-ego0": 0.003858054309562283, "get_duckie_state": 1.3540825379281914e-06, "in-drivable-lane": 2.5999999999999517, "deviation-heading": 11.192708719369604, "agent_compute-ego0": 0.018700611779929995, "complete-iteration": 0.22166376328289655, "set_robot_commands": 0.0025276734767408792, "distance-from-start": 2.838714518150331, "deviation-center-line": 3.2041193496599036, "driven_lanedir_consec": 24.517852007264064, "sim_compute_sim_state": 0.008586462689478332, "sim_compute_performance-ego0": 0.002050145480356844}}
set_robot_commands (min = median = mean = max): 0.0025276734767408792
sim_compute_performance-ego0 (min = median = mean = max): 0.002050145480356844
sim_compute_sim_state (min = median = mean = max): 0.008586462689478332
sim_render-ego0 (min = median = mean = max): 0.003858054309562283
simulation-passed: 1
step_physics (min = median = mean = max): 0.14912047751440197
survival_time (min = mean = max): 59.99999999999873
No reset possible
Job 71120 | submission 14829 | Bea Baselines 🐤 | minimal-agent-full | aido-LFV_full-sim-validation | sim-0of4 | success | up to date: yes | gpu-production-spot-0-01 | duration 0:06:52
survival_time_median: 32.00000000000032
in-drivable-lane_median: 2.1500000000000306
driven_lanedir_consec_median: 0.8223744082706665
deviation-center-line_median: 2.128094514163839


other stats
agent_compute-ego0 (min = median = mean = max): 0.030823967944069327
agent_compute-npc0 (min = median = mean = max): 0.03158283717174798
complete-iteration (min = median = mean = max): 0.26361978630565813
deviation-center-line (min = mean = max): 2.128094514163839
deviation-heading (min = median = mean = max): 5.5615249105998545
distance-from-start (min = median = mean = max): 0.8624786757639772
driven_any (min = median = mean = max): 1.7930879517520697
driven_lanedir_consec (min = mean = max): 0.8223744082706665
driven_lanedir (min = median = mean = max): 1.6588417414097092
get_duckie_state (min = median = mean = max): 1.667070314404373e-06
get_robot_state (min = median = mean = max): 0.008041496395878785
get_state_dump (min = median = mean = max): 0.007077785810330728
get_ui_image (min = median = mean = max): 0.027126335316626775
in-drivable-lane (min = mean = max): 2.1500000000000306
per-episodes
details{"LFV-full-small_loop-000-ego0": {"driven_any": 1.7930879517520697, "get_ui_image": 0.027126335316626775, "step_physics": 0.1312324227111388, "survival_time": 32.00000000000032, "driven_lanedir": 1.6588417414097092, "get_state_dump": 0.007077785810330728, "get_robot_state": 0.008041496395878785, "sim_render-ego0": 0.004075246742474679, "sim_render-npc0": 0.004092519629206189, "get_duckie_state": 1.667070314404373e-06, "in-drivable-lane": 2.1500000000000306, "deviation-heading": 5.5615249105998545, "agent_compute-ego0": 0.030823967944069327, "agent_compute-npc0": 0.03158283717174798, "complete-iteration": 0.26361978630565813, "set_robot_commands": 0.0026280987839245014, "distance-from-start": 0.8624786757639772, "deviation-center-line": 2.128094514163839, "driven_lanedir_consec": 0.8223744082706665, "sim_compute_sim_state": 0.0098857291961051, "sim_compute_performance-ego0": 0.002161659056236517, "sim_compute_performance-npc0": 0.0021928033665078294}}
set_robot_commands (min = median = mean = max): 0.0026280987839245014
sim_compute_performance-ego0 (min = median = mean = max): 0.002161659056236517
sim_compute_performance-npc0 (min = median = mean = max): 0.0021928033665078294
sim_compute_sim_state (min = median = mean = max): 0.0098857291961051
sim_render-ego0 (min = median = mean = max): 0.004075246742474679
sim_render-npc0 (min = median = mean = max): 0.004092519629206189
simulation-passed: 1
step_physics (min = median = mean = max): 0.1312324227111388
survival_time (min = mean = max): 32.00000000000032
No reset possible
Job 71091 | submission 14820 | Bea Baselines 🐤 | straight | aido-LFV_multi-sim-testing | 427 | success | up to date: yes | gpu-production-spot-0-01 | duration 0:10:39
survival_time_median: 21.600000000000172
in-drivable-lane_median: 14.62500000000018
driven_lanedir_consec_median: 0.6405505969905163
deviation-center-line_median: 0.24424163573072977


other stats
agent_compute-ego0: min 0.010835083688443131, median 0.011597588761574126, mean 0.011343420403863796, max 0.011597588761574126
agent_compute-ego1: min 0.01024278812540596, median 0.01024278812540596, mean 0.010392941872302455, max 0.010693249366095453
agent_compute-ego2 (min = median = mean = max): 0.010774891040616994
agent_compute-ego3 (min = median = mean = max): 0.011144933193990871
complete-iteration: min 0.1855895727007221, median 0.3170477154623829, mean 0.2732283345418293, max 0.3170477154623829
deviation-center-line: min 0.14214316689444695, mean 0.2946022856987864, max 0.5240453329184394
deviation-heading: min 0.8127377490483749, median 0.955914454504739, mean 1.274660272586616, max 2.709120965433242
distance-from-start: min 0.8258184115116032, median 1.564699800256753, mean 1.7300536776391244, max 3.400473892129533
driven_any: min 0.8259122633460878, median 1.5648582712518095, mean 1.7302723693684186, max 3.401196466784495
driven_lanedir_consec: min 0.20264999579512463, mean 0.6398888734294356, max 1.1975992089636844
driven_lanedir: min 0.20264999579512463, median 0.6405505969905163, mean 0.6398888734294356, max 1.1975992089636844
get_duckie_state: min 1.6066048649831432e-06, median 1.693707966088698e-06, mean 1.664673599053513e-06, max 1.693707966088698e-06
get_robot_state: min 0.007857594747266335, median 0.01444759930529165, mean 0.012250931119283211, max 0.01444759930529165
get_state_dump: min 0.006622808108191273, median 0.009565489396762628, mean 0.008584595633905508, max 0.009565489396762628
get_ui_image: min 0.024337929808747223, median 0.030159325852955735, mean 0.028218860504886232, max 0.030159325852955735
in-drivable-lane: min 7.700000000000043, mean 13.92500000000013, max 19.90000000000017
per-episodes
details{"LFV_multi-norm-techtrack-000-ego0": {"driven_any": 0.8259122633460878, "get_ui_image": 0.030159325852955735, "step_physics": 0.1662331425840805, "survival_time": 21.600000000000172, "driven_lanedir": 0.20264999579512463, "get_state_dump": 0.009565489396762628, "get_robot_state": 0.01444759930529165, "sim_render-ego0": 0.003907208629898882, "sim_render-ego1": 0.0036662968421918418, "sim_render-ego2": 0.003757126061525411, "sim_render-ego3": 0.0037357251032946017, "get_duckie_state": 1.693707966088698e-06, "in-drivable-lane": 19.90000000000017, "deviation-heading": 0.8127377490483749, "agent_compute-ego0": 0.011597588761574126, "agent_compute-ego1": 0.01024278812540596, "agent_compute-ego2": 0.010774891040616994, "agent_compute-ego3": 0.011144933193990871, "complete-iteration": 0.3170477154623829, "set_robot_commands": 0.0022302538362992, "distance-from-start": 0.8258184115116032, "deviation-center-line": 0.14214316689444695, "driven_lanedir_consec": 0.20264999579512463, "sim_compute_sim_state": 0.02070110058949671, "sim_compute_performance-ego0": 0.0020159292991937723, "sim_compute_performance-ego1": 0.001970242132490832, "sim_compute_performance-ego2": 0.001950908202772603, "sim_compute_performance-ego3": 0.0019334089398108784}, "LFV_multi-norm-techtrack-000-ego1": {"driven_any": 1.765033489909137, "get_ui_image": 0.030159325852955735, "step_physics": 0.1662331425840805, "survival_time": 21.600000000000172, "driven_lanedir": 0.8877792125682102, "get_state_dump": 0.009565489396762628, "get_robot_state": 0.01444759930529165, "sim_render-ego0": 0.003907208629898882, "sim_render-ego1": 0.0036662968421918418, "sim_render-ego2": 0.003757126061525411, "sim_render-ego3": 0.0037357251032946017, "get_duckie_state": 1.693707966088698e-06, "in-drivable-lane": 15.300000000000171, "deviation-heading": 2.709120965433242, "agent_compute-ego0": 0.011597588761574126, "agent_compute-ego1": 0.01024278812540596, "agent_compute-ego2": 0.010774891040616994, "agent_compute-ego3": 
0.011144933193990871, "complete-iteration": 0.3170477154623829, "set_robot_commands": 0.0022302538362992, "distance-from-start": 1.7648216925052462, "deviation-center-line": 0.5240453329184394, "driven_lanedir_consec": 0.8877792125682102, "sim_compute_sim_state": 0.02070110058949671, "sim_compute_performance-ego0": 0.0020159292991937723, "sim_compute_performance-ego1": 0.001970242132490832, "sim_compute_performance-ego2": 0.001950908202772603, "sim_compute_performance-ego3": 0.0019334089398108784}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 1.3646830525944815, "get_ui_image": 0.030159325852955735, "step_physics": 0.1662331425840805, "survival_time": 21.600000000000172, "driven_lanedir": 1.1975992089636844, "get_state_dump": 0.009565489396762628, "get_robot_state": 0.01444759930529165, "sim_render-ego0": 0.003907208629898882, "sim_render-ego1": 0.0036662968421918418, "sim_render-ego2": 0.003757126061525411, "sim_render-ego3": 0.0037357251032946017, "get_duckie_state": 1.693707966088698e-06, "in-drivable-lane": 13.950000000000191, "deviation-heading": 0.9063565933214408, "agent_compute-ego0": 0.011597588761574126, "agent_compute-ego1": 0.01024278812540596, "agent_compute-ego2": 0.010774891040616994, "agent_compute-ego3": 0.011144933193990871, "complete-iteration": 0.3170477154623829, "set_robot_commands": 0.0022302538362992, "distance-from-start": 1.3645779080082594, "deviation-center-line": 0.4535304408604156, "driven_lanedir_consec": 1.1975992089636844, "sim_compute_sim_state": 0.02070110058949671, "sim_compute_performance-ego0": 0.0020159292991937723, "sim_compute_performance-ego1": 0.001970242132490832, "sim_compute_performance-ego2": 0.001950908202772603, "sim_compute_performance-ego3": 0.0019334089398108784}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 3.401196466784495, "get_ui_image": 0.030159325852955735, "step_physics": 0.1662331425840805, "survival_time": 21.600000000000172, "driven_lanedir": 0.655418847241895, "get_state_dump": 
0.009565489396762628, "get_robot_state": 0.01444759930529165, "sim_render-ego0": 0.003907208629898882, "sim_render-ego1": 0.0036662968421918418, "sim_render-ego2": 0.003757126061525411, "sim_render-ego3": 0.0037357251032946017, "get_duckie_state": 1.693707966088698e-06, "in-drivable-lane": 16.850000000000183, "deviation-heading": 1.30791741870716, "agent_compute-ego0": 0.011597588761574126, "agent_compute-ego1": 0.01024278812540596, "agent_compute-ego2": 0.010774891040616994, "agent_compute-ego3": 0.011144933193990871, "complete-iteration": 0.3170477154623829, "set_robot_commands": 0.0022302538362992, "distance-from-start": 3.400473892129533, "deviation-center-line": 0.2890110869636866, "driven_lanedir_consec": 0.655418847241895, "sim_compute_sim_state": 0.02070110058949671, "sim_compute_performance-ego0": 0.0020159292991937723, "sim_compute_performance-ego1": 0.001970242132490832, "sim_compute_performance-ego2": 0.001950908202772603, "sim_compute_performance-ego3": 0.0019334089398108784}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 1.216662494360338, "get_ui_image": 0.024337929808747223, "step_physics": 0.0996833253203586, "survival_time": 12.000000000000036, "driven_lanedir": 0.6256823467391375, "get_state_dump": 0.006622808108191273, "get_robot_state": 0.007857594747266335, "sim_render-ego0": 0.004010907841915906, "sim_render-ego1": 0.003988908039583705, "get_duckie_state": 1.6066048649831432e-06, "in-drivable-lane": 7.700000000000043, "deviation-heading": 0.9568643296555384, "agent_compute-ego0": 0.010835083688443131, "agent_compute-ego1": 0.010693249366095453, "complete-iteration": 0.1855895727007221, "set_robot_commands": 0.0023518558359739692, "distance-from-start": 1.2165958864459103, "deviation-center-line": 0.19947218449777288, "driven_lanedir_consec": 0.6256823467391375, "sim_compute_sim_state": 0.008435432347024624, "sim_compute_performance-ego0": 0.0021383109429070563, "sim_compute_performance-ego1": 0.0020754000952629627}, 
"LFV_multi-norm-small_loop-000-ego1": {"driven_any": 1.808146449215971, "get_ui_image": 0.024337929808747223, "step_physics": 0.0996833253203586, "survival_time": 12.000000000000036, "driven_lanedir": 0.27020362926856123, "get_state_dump": 0.006622808108191273, "get_robot_state": 0.007857594747266335, "sim_render-ego0": 0.004010907841915906, "sim_render-ego1": 0.003988908039583705, "get_duckie_state": 1.6066048649831432e-06, "in-drivable-lane": 9.850000000000035, "deviation-heading": 0.9549645793539396, "agent_compute-ego0": 0.010835083688443131, "agent_compute-ego1": 0.010693249366095453, "complete-iteration": 0.1855895727007221, "set_robot_commands": 0.0023518558359739692, "distance-from-start": 1.808034275234195, "deviation-center-line": 0.15941150205795723, "driven_lanedir_consec": 0.27020362926856123, "sim_compute_sim_state": 0.008435432347024624, "sim_compute_performance-ego0": 0.0021383109429070563, "sim_compute_performance-ego1": 0.0020754000952629627}}
set_robot_commands: min 0.0022302538362992, median 0.0022302538362992, mean 0.0022707878361907896, max 0.0023518558359739692
sim_compute_performance-ego0: min 0.0020159292991937723, median 0.0020159292991937723, mean 0.0020567231804315333, max 0.0021383109429070563
sim_compute_performance-ego1: min 0.001970242132490832, median 0.001970242132490832, mean 0.002005294786748209, max 0.0020754000952629627
sim_compute_performance-ego2 (min = median = mean = max): 0.001950908202772603
sim_compute_performance-ego3 (min = median = mean = max): 0.0019334089398108784
sim_compute_sim_state: min 0.008435432347024624, median 0.02070110058949671, mean 0.01661254450867268, max 0.02070110058949671
sim_render-ego0: min 0.003907208629898882, median 0.003907208629898882, mean 0.003941775033904557, max 0.004010907841915906
sim_render-ego1: min 0.0036662968421918418, median 0.0036662968421918418, mean 0.003773833907989129, max 0.003988908039583705
sim_render-ego2 (min = median = mean = max): 0.003757126061525411
sim_render-ego3 (min = median = mean = max): 0.0037357251032946017
simulation-passed: 1
step_physics: min 0.0996833253203586, median 0.1662331425840805, mean 0.14404987016283988, max 0.1662331425840805
survival_time: min 12.000000000000036, mean 18.400000000000123, max 21.600000000000172
No reset possible
Job 71074 | submission 14824 | Bea Baselines 🐀 | straight | aido-LFI-sim-testing | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:34
survival_time_median: 11.350000000000026
in-drivable-lane_median: 7.850000000000031
driven_lanedir_consec_median: 0.5031282421170369
deviation-center-line_median: 0.2611691458704485


other stats
agent_compute-ego0_max: 0.010504899317758123
agent_compute-ego0_mean: 0.010504899317758123
agent_compute-ego0_median: 0.010504899317758123
agent_compute-ego0_min: 0.010504899317758123
complete-iteration_max: 0.13623441938768355
complete-iteration_mean: 0.13623441938768355
complete-iteration_median: 0.13623441938768355
complete-iteration_min: 0.13623441938768355
deviation-center-line_max: 0.2611691458704485
deviation-center-line_mean: 0.2611691458704485
deviation-center-line_min: 0.2611691458704485
deviation-heading_max: 0.9671255119018844
deviation-heading_mean: 0.9671255119018844
deviation-heading_median: 0.9671255119018844
deviation-heading_min: 0.9671255119018844
distance-from-start_max: 1.7003316309521253
distance-from-start_mean: 1.7003316309521253
distance-from-start_median: 1.7003316309521253
distance-from-start_min: 1.7003316309521253
driven_any_max: 1.700441507177767
driven_any_mean: 1.700441507177767
driven_any_median: 1.700441507177767
driven_any_min: 1.700441507177767
driven_lanedir_consec_max: 0.5031282421170369
driven_lanedir_consec_mean: 0.5031282421170369
driven_lanedir_consec_min: 0.5031282421170369
driven_lanedir_max: 0.5031282421170369
driven_lanedir_mean: 0.5031282421170369
driven_lanedir_median: 0.5031282421170369
driven_lanedir_min: 0.5031282421170369
get_duckie_state_max: 1.2224180656566957e-06
get_duckie_state_mean: 1.2224180656566957e-06
get_duckie_state_median: 1.2224180656566957e-06
get_duckie_state_min: 1.2224180656566957e-06
get_robot_state_max: 0.003636817137400309
get_robot_state_mean: 0.003636817137400309
get_robot_state_median: 0.003636817137400309
get_robot_state_min: 0.003636817137400309
get_state_dump_max: 0.0045938585933886075
get_state_dump_mean: 0.0045938585933886075
get_state_dump_median: 0.0045938585933886075
get_state_dump_min: 0.0045938585933886075
get_ui_image_max: 0.03255174452798408
get_ui_image_mean: 0.03255174452798408
get_ui_image_median: 0.03255174452798408
get_ui_image_min: 0.03255174452798408
in-drivable-lane_max: 7.850000000000031
in-drivable-lane_mean: 7.850000000000031
in-drivable-lane_min: 7.850000000000031
per-episodes
details{"LFI-norm-udem1-000-ego0": {"driven_any": 1.700441507177767, "get_ui_image": 0.03255174452798408, "step_physics": 0.06737335104691355, "survival_time": 11.350000000000026, "driven_lanedir": 0.5031282421170369, "get_state_dump": 0.0045938585933886075, "get_robot_state": 0.003636817137400309, "sim_render-ego0": 0.00369382427449812, "get_duckie_state": 1.2224180656566957e-06, "in-drivable-lane": 7.850000000000031, "deviation-heading": 0.9671255119018844, "agent_compute-ego0": 0.010504899317758123, "complete-iteration": 0.13623441938768355, "set_robot_commands": 0.0021786051884032133, "distance-from-start": 1.7003316309521253, "deviation-center-line": 0.2611691458704485, "driven_lanedir_consec": 0.5031282421170369, "sim_compute_sim_state": 0.00968877160758303, "sim_compute_performance-ego0": 0.0019340828845375465}}
set_robot_commands_max: 0.0021786051884032133
set_robot_commands_mean: 0.0021786051884032133
set_robot_commands_median: 0.0021786051884032133
set_robot_commands_min: 0.0021786051884032133
sim_compute_performance-ego0_max: 0.0019340828845375465
sim_compute_performance-ego0_mean: 0.0019340828845375465
sim_compute_performance-ego0_median: 0.0019340828845375465
sim_compute_performance-ego0_min: 0.0019340828845375465
sim_compute_sim_state_max: 0.00968877160758303
sim_compute_sim_state_mean: 0.00968877160758303
sim_compute_sim_state_median: 0.00968877160758303
sim_compute_sim_state_min: 0.00968877160758303
sim_render-ego0_max: 0.00369382427449812
sim_render-ego0_mean: 0.00369382427449812
sim_render-ego0_median: 0.00369382427449812
sim_render-ego0_min: 0.00369382427449812
simulation-passed: 1
step_physics_max: 0.06737335104691355
step_physics_mean: 0.06737335104691355
step_physics_median: 0.06737335104691355
step_physics_min: 0.06737335104691355
survival_time_max: 11.350000000000026
survival_time_mean: 11.350000000000026
survival_time_min: 11.350000000000026
No reset possible
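The `…_max` / `…_mean` / `…_median` / `…_min` rows reported for each job are plain aggregates over the episodes in the per-episodes `details` blob. A minimal sketch of that aggregation (a hypothetical reconstruction, not the evaluator's actual code; the episode name and metric values are copied from the job above):

```python
import statistics

# A trimmed per-episodes "details" blob, as in the job above (one episode).
details = {
    "LFI-norm-udem1-000-ego0": {
        "survival_time": 11.350000000000026,
        "deviation-center-line": 0.2611691458704485,
        "driven_lanedir_consec": 0.5031282421170369,
    },
}

def summarize(details: dict) -> dict:
    """Aggregate every metric across episodes into max/mean/median/min rows."""
    out = {}
    metrics = {name for episode in details.values() for name in episode}
    for m in sorted(metrics):
        values = [episode[m] for episode in details.values() if m in episode]
        out[f"{m}_max"] = max(values)
        out[f"{m}_mean"] = statistics.mean(values)
        out[f"{m}_median"] = statistics.median(values)
        out[f"{m}_min"] = min(values)
    return out

stats = summarize(details)
```

With a single episode, all four aggregates collapse to the same number, which is why the max/mean/median/min rows of the single-episode jobs in this listing are identical.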
Job 71058 | submission 14824 | Bea Baselines 🐀 | straight | aido-LFI-sim-testing | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:38
survival_time_median: 11.350000000000026
in-drivable-lane_median: 7.850000000000031
driven_lanedir_consec_median: 0.5031282421170369
deviation-center-line_median: 0.2611691458704485


other stats
agent_compute-ego0_max: 0.01037169234794483
agent_compute-ego0_mean: 0.01037169234794483
agent_compute-ego0_median: 0.01037169234794483
agent_compute-ego0_min: 0.01037169234794483
complete-iteration_max: 0.1384259159104866
complete-iteration_mean: 0.1384259159104866
complete-iteration_median: 0.1384259159104866
complete-iteration_min: 0.1384259159104866
deviation-center-line_max: 0.2611691458704485
deviation-center-line_mean: 0.2611691458704485
deviation-center-line_min: 0.2611691458704485
deviation-heading_max: 0.9671255119018844
deviation-heading_mean: 0.9671255119018844
deviation-heading_median: 0.9671255119018844
deviation-heading_min: 0.9671255119018844
distance-from-start_max: 1.7003316309521253
distance-from-start_mean: 1.7003316309521253
distance-from-start_median: 1.7003316309521253
distance-from-start_min: 1.7003316309521253
driven_any_max: 1.700441507177767
driven_any_mean: 1.700441507177767
driven_any_median: 1.700441507177767
driven_any_min: 1.700441507177767
driven_lanedir_consec_max: 0.5031282421170369
driven_lanedir_consec_mean: 0.5031282421170369
driven_lanedir_consec_min: 0.5031282421170369
driven_lanedir_max: 0.5031282421170369
driven_lanedir_mean: 0.5031282421170369
driven_lanedir_median: 0.5031282421170369
driven_lanedir_min: 0.5031282421170369
get_duckie_state_max: 1.4127346507289953e-06
get_duckie_state_mean: 1.4127346507289953e-06
get_duckie_state_median: 1.4127346507289953e-06
get_duckie_state_min: 1.4127346507289953e-06
get_robot_state_max: 0.003701900181017424
get_robot_state_mean: 0.003701900181017424
get_robot_state_median: 0.003701900181017424
get_robot_state_min: 0.003701900181017424
get_state_dump_max: 0.004718306817506489
get_state_dump_mean: 0.004718306817506489
get_state_dump_median: 0.004718306817506489
get_state_dump_min: 0.004718306817506489
get_ui_image_max: 0.03220588491674055
get_ui_image_mean: 0.03220588491674055
get_ui_image_median: 0.03220588491674055
get_ui_image_min: 0.03220588491674055
in-drivable-lane_max: 7.850000000000031
in-drivable-lane_mean: 7.850000000000031
in-drivable-lane_min: 7.850000000000031
per-episodes
details{"LFI-norm-udem1-000-ego0": {"driven_any": 1.700441507177767, "get_ui_image": 0.03220588491674055, "step_physics": 0.0699311703966375, "survival_time": 11.350000000000026, "driven_lanedir": 0.5031282421170369, "get_state_dump": 0.004718306817506489, "get_robot_state": 0.003701900181017424, "sim_render-ego0": 0.0037790840132194657, "get_duckie_state": 1.4127346507289953e-06, "in-drivable-lane": 7.850000000000031, "deviation-heading": 0.9671255119018844, "agent_compute-ego0": 0.01037169234794483, "complete-iteration": 0.1384259159104866, "set_robot_commands": 0.002266524130837959, "distance-from-start": 1.7003316309521253, "deviation-center-line": 0.2611691458704485, "driven_lanedir_consec": 0.5031282421170369, "sim_compute_sim_state": 0.00939272265685232, "sim_compute_performance-ego0": 0.001977934126268353}}
set_robot_commands_max: 0.002266524130837959
set_robot_commands_mean: 0.002266524130837959
set_robot_commands_median: 0.002266524130837959
set_robot_commands_min: 0.002266524130837959
sim_compute_performance-ego0_max: 0.001977934126268353
sim_compute_performance-ego0_mean: 0.001977934126268353
sim_compute_performance-ego0_median: 0.001977934126268353
sim_compute_performance-ego0_min: 0.001977934126268353
sim_compute_sim_state_max: 0.00939272265685232
sim_compute_sim_state_mean: 0.00939272265685232
sim_compute_sim_state_median: 0.00939272265685232
sim_compute_sim_state_min: 0.00939272265685232
sim_render-ego0_max: 0.0037790840132194657
sim_render-ego0_mean: 0.0037790840132194657
sim_render-ego0_median: 0.0037790840132194657
sim_render-ego0_min: 0.0037790840132194657
simulation-passed: 1
step_physics_max: 0.0699311703966375
step_physics_mean: 0.0699311703966375
step_physics_median: 0.0699311703966375
step_physics_min: 0.0699311703966375
survival_time_max: 11.350000000000026
survival_time_mean: 11.350000000000026
survival_time_min: 11.350000000000026
No reset possible
Job 71044 | submission 14816 | Bea Baselines 🐀 | straight | aido-LFP-sim-validation | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:55
survival_time_median: 14.850000000000076
in-drivable-lane_median: 13.450000000000076
driven_lanedir_consec_median: 0.1090692978324248
deviation-center-line_median: 0.08978598221263635


other stats
agent_compute-ego0_max: 0.010786014115250353
agent_compute-ego0_mean: 0.010786014115250353
agent_compute-ego0_median: 0.010786014115250353
agent_compute-ego0_min: 0.010786014115250353
complete-iteration_max: 0.16331726512652917
complete-iteration_mean: 0.16331726512652917
complete-iteration_median: 0.16331726512652917
complete-iteration_min: 0.16331726512652917
deviation-center-line_max: 0.08978598221263635
deviation-center-line_mean: 0.08978598221263635
deviation-center-line_min: 0.08978598221263635
deviation-heading_max: 1.146160833018269
deviation-heading_mean: 1.146160833018269
deviation-heading_median: 1.146160833018269
deviation-heading_min: 1.146160833018269
distance-from-start_max: 2.2809361249080293
distance-from-start_mean: 2.2809361249080293
distance-from-start_median: 2.2809361249080293
distance-from-start_min: 2.2809361249080293
driven_any_max: 2.281451796299496
driven_any_mean: 2.281451796299496
driven_any_median: 2.281451796299496
driven_any_min: 2.281451796299496
driven_lanedir_consec_max: 0.1090692978324248
driven_lanedir_consec_mean: 0.1090692978324248
driven_lanedir_consec_min: 0.1090692978324248
driven_lanedir_max: 0.1090692978324248
driven_lanedir_mean: 0.1090692978324248
driven_lanedir_median: 0.1090692978324248
driven_lanedir_min: 0.1090692978324248
get_duckie_state_max: 0.019922019651272153
get_duckie_state_mean: 0.019922019651272153
get_duckie_state_median: 0.019922019651272153
get_duckie_state_min: 0.019922019651272153
get_robot_state_max: 0.003593672841987354
get_robot_state_mean: 0.003593672841987354
get_robot_state_median: 0.003593672841987354
get_robot_state_min: 0.003593672841987354
get_state_dump_max: 0.0077104312461494596
get_state_dump_mean: 0.0077104312461494596
get_state_dump_median: 0.0077104312461494596
get_state_dump_min: 0.0077104312461494596
get_ui_image_max: 0.03340456469747044
get_ui_image_mean: 0.03340456469747044
get_ui_image_median: 0.03340456469747044
get_ui_image_min: 0.03340456469747044
in-drivable-lane_max: 13.450000000000076
in-drivable-lane_mean: 13.450000000000076
in-drivable-lane_min: 13.450000000000076
per-episodes
details{"LFP-norm-zigzag-000-ego0": {"driven_any": 2.281451796299496, "get_ui_image": 0.03340456469747044, "step_physics": 0.06577751060460238, "survival_time": 14.850000000000076, "driven_lanedir": 0.1090692978324248, "get_state_dump": 0.0077104312461494596, "get_robot_state": 0.003593672841987354, "sim_render-ego0": 0.0036412437490168834, "get_duckie_state": 0.019922019651272153, "in-drivable-lane": 13.450000000000076, "deviation-heading": 1.146160833018269, "agent_compute-ego0": 0.010786014115250353, "complete-iteration": 0.16331726512652917, "set_robot_commands": 0.0021423293440133934, "distance-from-start": 2.2809361249080293, "deviation-center-line": 0.08978598221263635, "driven_lanedir_consec": 0.1090692978324248, "sim_compute_sim_state": 0.01439343042821692, "sim_compute_performance-ego0": 0.0018649085256077296}}
set_robot_commands_max: 0.0021423293440133934
set_robot_commands_mean: 0.0021423293440133934
set_robot_commands_median: 0.0021423293440133934
set_robot_commands_min: 0.0021423293440133934
sim_compute_performance-ego0_max: 0.0018649085256077296
sim_compute_performance-ego0_mean: 0.0018649085256077296
sim_compute_performance-ego0_median: 0.0018649085256077296
sim_compute_performance-ego0_min: 0.0018649085256077296
sim_compute_sim_state_max: 0.01439343042821692
sim_compute_sim_state_mean: 0.01439343042821692
sim_compute_sim_state_median: 0.01439343042821692
sim_compute_sim_state_min: 0.01439343042821692
sim_render-ego0_max: 0.0036412437490168834
sim_render-ego0_mean: 0.0036412437490168834
sim_render-ego0_median: 0.0036412437490168834
sim_render-ego0_min: 0.0036412437490168834
simulation-passed: 1
step_physics_max: 0.06577751060460238
step_physics_mean: 0.06577751060460238
step_physics_median: 0.06577751060460238
step_physics_min: 0.06577751060460238
survival_time_max: 14.850000000000076
survival_time_mean: 14.850000000000076
survival_time_min: 14.850000000000076
No reset possible
Job 71032 | submission 14810 | Bea Baselines 🐀 | straight | aido-LFI-full-sim-testing | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:01:42
survival_time_median: 3.399999999999996
in-drivable-lane_median: 1.7499999999999951
driven_lanedir_consec_median: 0.17706208610992968
deviation-center-line_median: 0.11919310983729516


other stats
agent_compute-ego0_max: 0.012125467908555183
agent_compute-ego0_mean: 0.012125467908555183
agent_compute-ego0_median: 0.012125467908555183
agent_compute-ego0_min: 0.012125467908555183
complete-iteration_max: 0.13851822286412335
complete-iteration_mean: 0.13851822286412335
complete-iteration_median: 0.13851822286412335
complete-iteration_min: 0.13851822286412335
deviation-center-line_max: 0.11919310983729516
deviation-center-line_mean: 0.11919310983729516
deviation-center-line_min: 0.11919310983729516
deviation-heading_max: 1.0119470830175652
deviation-heading_mean: 1.0119470830175652
deviation-heading_median: 1.0119470830175652
deviation-heading_min: 1.0119470830175652
distance-from-start_max: 0.38992181148908694
distance-from-start_mean: 0.38992181148908694
distance-from-start_median: 0.38992181148908694
distance-from-start_min: 0.38992181148908694
driven_any_max: 0.3899290723045062
driven_any_mean: 0.3899290723045062
driven_any_median: 0.3899290723045062
driven_any_min: 0.3899290723045062
driven_lanedir_consec_max: 0.17706208610992968
driven_lanedir_consec_mean: 0.17706208610992968
driven_lanedir_consec_min: 0.17706208610992968
driven_lanedir_max: 0.17706208610992968
driven_lanedir_mean: 0.17706208610992968
driven_lanedir_median: 0.17706208610992968
driven_lanedir_min: 0.17706208610992968
get_duckie_state_max: 1.395958057348279e-06
get_duckie_state_mean: 1.395958057348279e-06
get_duckie_state_median: 1.395958057348279e-06
get_duckie_state_min: 1.395958057348279e-06
get_robot_state_max: 0.003650917523149131
get_robot_state_mean: 0.003650917523149131
get_robot_state_median: 0.003650917523149131
get_robot_state_min: 0.003650917523149131
get_state_dump_max: 0.00468434112659399
get_state_dump_mean: 0.00468434112659399
get_state_dump_median: 0.00468434112659399
get_state_dump_min: 0.00468434112659399
get_ui_image_max: 0.03249774808469026
get_ui_image_mean: 0.03249774808469026
get_ui_image_median: 0.03249774808469026
get_ui_image_min: 0.03249774808469026
in-drivable-lane_max: 1.7499999999999951
in-drivable-lane_mean: 1.7499999999999951
in-drivable-lane_min: 1.7499999999999951
per-episodes
details{"LFI-full-udem1-000-ego0": {"driven_any": 0.3899290723045062, "get_ui_image": 0.03249774808469026, "step_physics": 0.06842643281687862, "survival_time": 3.399999999999996, "driven_lanedir": 0.17706208610992968, "get_state_dump": 0.00468434112659399, "get_robot_state": 0.003650917523149131, "sim_render-ego0": 0.0038314902264138927, "get_duckie_state": 1.395958057348279e-06, "in-drivable-lane": 1.7499999999999951, "deviation-heading": 1.0119470830175652, "agent_compute-ego0": 0.012125467908555183, "complete-iteration": 0.13851822286412335, "set_robot_commands": 0.0023347329402315445, "distance-from-start": 0.38992181148908694, "deviation-center-line": 0.11919310983729516, "driven_lanedir_consec": 0.17706208610992968, "sim_compute_sim_state": 0.008843960969344429, "sim_compute_performance-ego0": 0.0020381402278292007}}
set_robot_commands_max: 0.0023347329402315445
set_robot_commands_mean: 0.0023347329402315445
set_robot_commands_median: 0.0023347329402315445
set_robot_commands_min: 0.0023347329402315445
sim_compute_performance-ego0_max: 0.0020381402278292007
sim_compute_performance-ego0_mean: 0.0020381402278292007
sim_compute_performance-ego0_median: 0.0020381402278292007
sim_compute_performance-ego0_min: 0.0020381402278292007
sim_compute_sim_state_max: 0.008843960969344429
sim_compute_sim_state_mean: 0.008843960969344429
sim_compute_sim_state_median: 0.008843960969344429
sim_compute_sim_state_min: 0.008843960969344429
sim_render-ego0_max: 0.0038314902264138927
sim_render-ego0_mean: 0.0038314902264138927
sim_render-ego0_median: 0.0038314902264138927
sim_render-ego0_min: 0.0038314902264138927
simulation-passed: 1
step_physics_max: 0.06842643281687862
step_physics_mean: 0.06842643281687862
step_physics_median: 0.06842643281687862
step_physics_min: 0.06842643281687862
survival_time_max: 3.399999999999996
survival_time_mean: 3.399999999999996
survival_time_min: 3.399999999999996
No reset possible
Job 71016 | submission 13640 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LF-sim-testing | sim-3of4 | aborted | up to date: no | gpu-production-spot-0-01 | 0:00:41
KeyboardInterrupt:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1137, in run_one
    heartbeat()
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 596, in heartbeat
    raise KeyboardInterrupt(msg_)
KeyboardInterrupt: The server told us to abort the job because: The challenge has been updated.
No reset possible
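The traceback above shows the runner's heartbeat turning a server-side abort into a `KeyboardInterrupt`, which is how these jobs end up with status `aborted`. A minimal sketch of that control flow (hypothetical; the real `duckietown_challenges_runner` implementation differs, and the `server_response` shape here is an assumption):

```python
def heartbeat(server_response: dict) -> None:
    """Periodic check-in with the server; raise KeyboardInterrupt on abort."""
    if server_response.get("abort"):
        msg = f"The server told us to abort the job because: {server_response['why']}"
        raise KeyboardInterrupt(msg)

def run_one(server_response: dict) -> str:
    """Run one job step, converting a server-requested abort into a status."""
    try:
        heartbeat(server_response)
        return "success"
    except KeyboardInterrupt as e:
        return f"aborted: {e}"
```

Raising `KeyboardInterrupt` lets a server-requested abort reuse the runner's normal interrupt handling, the same path a manual Ctrl-C would take.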
Job 70960 | submission 13618 | Raphael Jean | mobile-segmentation-pedestrian | aido-LFV-sim-validation | sim-3of4 | aborted | up to date: no | gpu-production-spot-0-01 | 0:07:46
KeyboardInterrupt:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1137, in run_one
    heartbeat()
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 596, in heartbeat
    raise KeyboardInterrupt(msg_)
KeyboardInterrupt: The server told us to abort the job because: The challenge has been updated.
No reset possible
Job 70915 | submission 13634 | Raphael Jean | mobile-segmentation | aido-LFV-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:06:20
survival_time_median: 12.050000000000036
in-drivable-lane_median: 0.1999999999999993
driven_lanedir_consec_median: 4.72053447189324
deviation-center-line_median: 0.7909410352861372


other stats
agent_compute-ego0_max: 0.019027656760097537
agent_compute-ego0_mean: 0.019027656760097537
agent_compute-ego0_median: 0.019027656760097537
agent_compute-ego0_min: 0.019027656760097537
agent_compute-npc0_max: 0.060288679501241886
agent_compute-npc0_mean: 0.060288679501241886
agent_compute-npc0_median: 0.060288679501241886
agent_compute-npc0_min: 0.060288679501241886
agent_compute-npc1_max: 0.057738508074736794
agent_compute-npc1_mean: 0.057738508074736794
agent_compute-npc1_median: 0.057738508074736794
agent_compute-npc1_min: 0.057738508074736794
agent_compute-npc2_max: 0.050234624176971186
agent_compute-npc2_mean: 0.050234624176971186
agent_compute-npc2_median: 0.050234624176971186
agent_compute-npc2_min: 0.050234624176971186
agent_compute-npc3_max: 0.057569934316903104
agent_compute-npc3_mean: 0.057569934316903104
agent_compute-npc3_median: 0.057569934316903104
agent_compute-npc3_min: 0.057569934316903104
complete-iteration_max: 0.7902330889189539
complete-iteration_mean: 0.7902330889189539
complete-iteration_median: 0.7902330889189539
complete-iteration_min: 0.7902330889189539
deviation-center-line_max: 0.7909410352861372
deviation-center-line_mean: 0.7909410352861372
deviation-center-line_min: 0.7909410352861372
deviation-heading_max: 2.422609778363414
deviation-heading_mean: 2.422609778363414
deviation-heading_median: 2.422609778363414
deviation-heading_min: 2.422609778363414
distance-from-start_max: 2.551039119285988
distance-from-start_mean: 2.551039119285988
distance-from-start_median: 2.551039119285988
distance-from-start_min: 2.551039119285988
driven_any_max: 4.937607484422209
driven_any_mean: 4.937607484422209
driven_any_median: 4.937607484422209
driven_any_min: 4.937607484422209
driven_lanedir_consec_max: 4.72053447189324
driven_lanedir_consec_mean: 4.72053447189324
driven_lanedir_consec_min: 4.72053447189324
driven_lanedir_max: 4.72053447189324
driven_lanedir_mean: 4.72053447189324
driven_lanedir_median: 4.72053447189324
driven_lanedir_min: 4.72053447189324
get_duckie_state_max: 1.8048877558432336e-06
get_duckie_state_mean: 1.8048877558432336e-06
get_duckie_state_median: 1.8048877558432336e-06
get_duckie_state_min: 1.8048877558432336e-06
get_robot_state_max: 0.019359031984628725
get_robot_state_mean: 0.019359031984628725
get_robot_state_median: 0.019359031984628725
get_robot_state_min: 0.019359031984628725
get_state_dump_max: 0.012273681065267768
get_state_dump_mean: 0.012273681065267768
get_state_dump_median: 0.012273681065267768
get_state_dump_min: 0.012273681065267768
get_ui_image_max: 0.034985363976029325
get_ui_image_mean: 0.034985363976029325
get_ui_image_median: 0.034985363976029325
get_ui_image_min: 0.034985363976029325
in-drivable-lane_max: 0.1999999999999993
in-drivable-lane_mean: 0.1999999999999993
in-drivable-lane_min: 0.1999999999999993
per-episodes
details{"LFV-norm-zigzag-000-ego0": {"driven_any": 4.937607484422209, "get_ui_image": 0.034985363976029325, "step_physics": 0.3783197856146442, "survival_time": 12.050000000000036, "driven_lanedir": 4.72053447189324, "get_state_dump": 0.012273681065267768, "get_robot_state": 0.019359031984628725, "sim_render-ego0": 0.0040579581063641, "sim_render-npc0": 0.0049351384817076125, "sim_render-npc1": 0.004647605675311128, "sim_render-npc2": 0.004212232660656133, "sim_render-npc3": 0.004296300825008676, "get_duckie_state": 1.8048877558432336e-06, "in-drivable-lane": 0.1999999999999993, "deviation-heading": 2.422609778363414, "agent_compute-ego0": 0.019027656760097537, "agent_compute-npc0": 0.060288679501241886, "agent_compute-npc1": 0.057738508074736794, "agent_compute-npc2": 0.050234624176971186, "agent_compute-npc3": 0.057569934316903104, "complete-iteration": 0.7902330889189539, "set_robot_commands": 0.002674058449169821, "distance-from-start": 2.551039119285988, "deviation-center-line": 0.7909410352861372, "driven_lanedir_consec": 4.72053447189324, "sim_compute_sim_state": 0.052579170416209325, "sim_compute_performance-ego0": 0.002263532197179873, "sim_compute_performance-npc0": 0.0022698493043253245, "sim_compute_performance-npc1": 0.002541441562747167, "sim_compute_performance-npc2": 0.002291282346425963, "sim_compute_performance-npc3": 0.0022344264117154207}}
set_robot_commands_max: 0.002674058449169821
set_robot_commands_mean: 0.002674058449169821
set_robot_commands_median: 0.002674058449169821
set_robot_commands_min: 0.002674058449169821
sim_compute_performance-ego0_max: 0.002263532197179873
sim_compute_performance-ego0_mean: 0.002263532197179873
sim_compute_performance-ego0_median: 0.002263532197179873
sim_compute_performance-ego0_min: 0.002263532197179873
sim_compute_performance-npc0_max: 0.0022698493043253245
sim_compute_performance-npc0_mean: 0.0022698493043253245
sim_compute_performance-npc0_median: 0.0022698493043253245
sim_compute_performance-npc0_min: 0.0022698493043253245
sim_compute_performance-npc1_max: 0.002541441562747167
sim_compute_performance-npc1_mean: 0.002541441562747167
sim_compute_performance-npc1_median: 0.002541441562747167
sim_compute_performance-npc1_min: 0.002541441562747167
sim_compute_performance-npc2_max: 0.002291282346425963
sim_compute_performance-npc2_mean: 0.002291282346425963
sim_compute_performance-npc2_median: 0.002291282346425963
sim_compute_performance-npc2_min: 0.002291282346425963
sim_compute_performance-npc3_max: 0.0022344264117154207
sim_compute_performance-npc3_mean: 0.0022344264117154207
sim_compute_performance-npc3_median: 0.0022344264117154207
sim_compute_performance-npc3_min: 0.0022344264117154207
sim_compute_sim_state_max: 0.052579170416209325
sim_compute_sim_state_mean: 0.052579170416209325
sim_compute_sim_state_median: 0.052579170416209325
sim_compute_sim_state_min: 0.052579170416209325
sim_render-ego0_max: 0.0040579581063641
sim_render-ego0_mean: 0.0040579581063641
sim_render-ego0_median: 0.0040579581063641
sim_render-ego0_min: 0.0040579581063641
sim_render-npc0_max: 0.0049351384817076125
sim_render-npc0_mean: 0.0049351384817076125
sim_render-npc0_median: 0.0049351384817076125
sim_render-npc0_min: 0.0049351384817076125
sim_render-npc1_max: 0.004647605675311128
sim_render-npc1_mean: 0.004647605675311128
sim_render-npc1_median: 0.004647605675311128
sim_render-npc1_min: 0.004647605675311128
sim_render-npc2_max: 0.004212232660656133
sim_render-npc2_mean: 0.004212232660656133
sim_render-npc2_median: 0.004212232660656133
sim_render-npc2_min: 0.004212232660656133
sim_render-npc3_max: 0.004296300825008676
sim_render-npc3_mean: 0.004296300825008676
sim_render-npc3_median: 0.004296300825008676
sim_render-npc3_min: 0.004296300825008676
simulation-passed: 1
step_physics_max: 0.3783197856146442
step_physics_mean: 0.3783197856146442
step_physics_median: 0.3783197856146442
step_physics_min: 0.3783197856146442
survival_time_max: 12.050000000000036
survival_time_mean: 12.050000000000036
survival_time_min: 12.050000000000036
No reset possible
Job 70882 | submission 13625 | Raphael Jean | mobile-segmentation | aido-LF-sim-testing | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:04:20
driven_lanedir_consec_median: 7.586022745333411
survival_time_median: 21.600000000000172
deviation-center-line_median: 0.8527392281021797
in-drivable-lane_median: 3.45000000000004


other stats
agent_compute-ego0_max: 0.01919991810382376
agent_compute-ego0_mean: 0.01919991810382376
agent_compute-ego0_median: 0.01919991810382376
agent_compute-ego0_min: 0.01919991810382376
complete-iteration_max: 0.2509292779838791
complete-iteration_mean: 0.2509292779838791
complete-iteration_median: 0.2509292779838791
complete-iteration_min: 0.2509292779838791
deviation-center-line_max: 0.8527392281021797
deviation-center-line_mean: 0.8527392281021797
deviation-center-line_min: 0.8527392281021797
deviation-heading_max: 4.588035691265385
deviation-heading_mean: 4.588035691265385
deviation-heading_median: 4.588035691265385
deviation-heading_min: 4.588035691265385
distance-from-start_max: 3.6587270184951177
distance-from-start_mean: 3.6587270184951177
distance-from-start_median: 3.6587270184951177
distance-from-start_min: 3.6587270184951177
driven_any_max: 9.10605672912445
driven_any_mean: 9.10605672912445
driven_any_median: 9.10605672912445
driven_any_min: 9.10605672912445
driven_lanedir_consec_max: 7.586022745333411
driven_lanedir_consec_mean: 7.586022745333411
driven_lanedir_consec_min: 7.586022745333411
driven_lanedir_max: 7.586022745333411
driven_lanedir_mean: 7.586022745333411
driven_lanedir_median: 7.586022745333411
driven_lanedir_min: 7.586022745333411
get_duckie_state_max: 1.373246966002995e-06
get_duckie_state_mean: 1.373246966002995e-06
get_duckie_state_median: 1.373246966002995e-06
get_duckie_state_min: 1.373246966002995e-06
get_robot_state_max: 0.0041897742907940376
get_robot_state_mean: 0.0041897742907940376
get_robot_state_median: 0.0041897742907940376
get_robot_state_min: 0.0041897742907940376
get_state_dump_max: 0.005106919907532443
get_state_dump_mean: 0.005106919907532443
get_state_dump_median: 0.005106919907532443
get_state_dump_min: 0.005106919907532443
get_ui_image_max: 0.03083724898499099
get_ui_image_mean: 0.03083724898499099
get_ui_image_median: 0.03083724898499099
get_ui_image_min: 0.03083724898499099
in-drivable-lane_max: 3.45000000000004
in-drivable-lane_mean: 3.45000000000004
in-drivable-lane_min: 3.45000000000004
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 9.10605672912445, "get_ui_image": 0.03083724898499099, "step_physics": 0.16925587279691984, "survival_time": 21.600000000000172, "driven_lanedir": 7.586022745333411, "get_state_dump": 0.005106919907532443, "get_robot_state": 0.0041897742907940376, "sim_render-ego0": 0.004109170089968358, "get_duckie_state": 1.373246966002995e-06, "in-drivable-lane": 3.45000000000004, "deviation-heading": 4.588035691265385, "agent_compute-ego0": 0.01919991810382376, "complete-iteration": 0.2509292779838791, "set_robot_commands": 0.0026211578906546015, "distance-from-start": 3.6587270184951177, "deviation-center-line": 0.8527392281021797, "driven_lanedir_consec": 7.586022745333411, "sim_compute_sim_state": 0.013261037390446827, "sim_compute_performance-ego0": 0.002252493939829203}}
set_robot_commands_max: 0.0026211578906546015
set_robot_commands_mean: 0.0026211578906546015
set_robot_commands_median: 0.0026211578906546015
set_robot_commands_min: 0.0026211578906546015
sim_compute_performance-ego0_max: 0.002252493939829203
sim_compute_performance-ego0_mean: 0.002252493939829203
sim_compute_performance-ego0_median: 0.002252493939829203
sim_compute_performance-ego0_min: 0.002252493939829203
sim_compute_sim_state_max: 0.013261037390446827
sim_compute_sim_state_mean: 0.013261037390446827
sim_compute_sim_state_median: 0.013261037390446827
sim_compute_sim_state_min: 0.013261037390446827
sim_render-ego0_max: 0.004109170089968358
sim_render-ego0_mean: 0.004109170089968358
sim_render-ego0_median: 0.004109170089968358
sim_render-ego0_min: 0.004109170089968358
simulation-passed: 1
step_physics_max: 0.16925587279691984
step_physics_mean: 0.16925587279691984
step_physics_median: 0.16925587279691984
step_physics_min: 0.16925587279691984
survival_time_max: 21.600000000000172
survival_time_mean: 21.600000000000172
survival_time_min: 21.600000000000172
No reset possible
Job 70861 | submission 13625 | Raphael Jean | mobile-segmentation | aido-LF-sim-testing | sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:14
driven_lanedir_consec_median: 1.4979072482626865
survival_time_median: 8.649999999999988
deviation-center-line_median: 0.21566140857641855
in-drivable-lane_median: 4.849999999999993


other stats
agent_compute-ego0_max: 0.019206680100539636
agent_compute-ego0_mean: 0.019206680100539636
agent_compute-ego0_median: 0.019206680100539636
agent_compute-ego0_min: 0.019206680100539636
complete-iteration_max: 0.18572640830072865
complete-iteration_mean: 0.18572640830072865
complete-iteration_median: 0.18572640830072865
complete-iteration_min: 0.18572640830072865
deviation-center-line_max: 0.21566140857641855
deviation-center-line_mean: 0.21566140857641855
deviation-center-line_min: 0.21566140857641855
deviation-heading_max: 0.9267935270262616
deviation-heading_mean: 0.9267935270262616
deviation-heading_median: 0.9267935270262616
deviation-heading_min: 0.9267935270262616
distance-from-start_max: 1.8620726294805088
distance-from-start_mean: 1.8620726294805088
distance-from-start_median: 1.8620726294805088
distance-from-start_min: 1.8620726294805088
driven_any_max: 3.727027566194445
driven_any_mean: 3.727027566194445
driven_any_median: 3.727027566194445
driven_any_min: 3.727027566194445
driven_lanedir_consec_max: 1.4979072482626865
driven_lanedir_consec_mean: 1.4979072482626865
driven_lanedir_consec_min: 1.4979072482626865
driven_lanedir_max: 1.4979072482626865
driven_lanedir_mean: 1.4979072482626865
driven_lanedir_median: 1.4979072482626865
driven_lanedir_min: 1.4979072482626865
get_duckie_state_max: 1.6058998546381106e-06
get_duckie_state_mean: 1.6058998546381106e-06
get_duckie_state_median: 1.6058998546381106e-06
get_duckie_state_min: 1.6058998546381106e-06
get_robot_state_max: 0.004386748390636225
get_robot_state_mean: 0.004386748390636225
get_robot_state_median: 0.004386748390636225
get_robot_state_min: 0.004386748390636225
get_state_dump_max: 0.005490012552546358
get_state_dump_mean: 0.005490012552546358
get_state_dump_median: 0.005490012552546358
get_state_dump_min: 0.005490012552546358
get_ui_image_max: 0.023993367436288417
get_ui_image_mean: 0.023993367436288417
get_ui_image_median: 0.023993367436288417
get_ui_image_min: 0.023993367436288417
in-drivable-lane_max: 4.849999999999993
in-drivable-lane_mean: 4.849999999999993
in-drivable-lane_min: 4.849999999999993
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 3.727027566194445, "get_ui_image": 0.023993367436288417, "step_physics": 0.11711934106103306, "survival_time": 8.649999999999988, "driven_lanedir": 1.4979072482626865, "get_state_dump": 0.005490012552546358, "get_robot_state": 0.004386748390636225, "sim_render-ego0": 0.004229459269293423, "get_duckie_state": 1.6058998546381106e-06, "in-drivable-lane": 4.849999999999993, "deviation-heading": 0.9267935270262616, "agent_compute-ego0": 0.019206680100539636, "complete-iteration": 0.18572640830072865, "set_robot_commands": 0.00274802487472008, "distance-from-start": 1.8620726294805088, "deviation-center-line": 0.21566140857641855, "driven_lanedir_consec": 1.4979072482626865, "sim_compute_sim_state": 0.006138217860254748, "sim_compute_performance-ego0": 0.002299418394592987}}
set_robot_commands (max/mean/median/min): 0.00274802487472008
sim_compute_performance-ego0 (max/mean/median/min): 0.002299418394592987
sim_compute_sim_state (max/mean/median/min): 0.006138217860254748
sim_render-ego0 (max/mean/median/min): 0.004229459269293423
simulation-passed: 1
step_physics (max/mean/median/min): 0.11711934106103306
survival_time (max/mean/min): 8.649999999999988
No reset possible
Job 70831 | submission 13634 | Raphael Jean | mobile-segmentation | aido-LFV-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | 0:03:37
survival_time_median: 8.04999999999998
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.121511813017647
deviation-center-line_median: 0.3957647315531792


other stats
agent_compute-ego0 (max/mean/median/min): 0.018537580231089652
agent_compute-npc0 (max/mean/median/min): 0.03720217722433585
agent_compute-npc1 (max/mean/median/min): 0.0364726855431074
agent_compute-npc2 (max/mean/median/min): 0.035453856727223336
complete-iteration (max/mean/median/min): 0.49053722546424394
deviation-center-line (max/mean/min): 0.3957647315531792
deviation-heading (max/mean/median/min): 1.479185519288083
distance-from-start (max/mean/median/min): 2.493075927361664
driven_any (max/mean/median/min): 3.184221076730592
driven_lanedir_consec (max/mean/min): 3.121511813017647
driven_lanedir (max/mean/median/min): 3.121511813017647
get_duckie_state (max/mean/median/min): 1.8955748758198304e-06
get_robot_state (max/mean/median/min): 0.016078669347880797
get_state_dump (max/mean/median/min): 0.010740163885516884
get_ui_image (max/mean/median/min): 0.026127113236321345
in-drivable-lane (max/mean/min): 0.0
per-episodes
details{"LFV-norm-loop-000-ego0": {"driven_any": 3.184221076730592, "get_ui_image": 0.026127113236321345, "step_physics": 0.2482999003963706, "survival_time": 8.04999999999998, "driven_lanedir": 3.121511813017647, "get_state_dump": 0.010740163885516884, "get_robot_state": 0.016078669347880797, "sim_render-ego0": 0.004090100158879786, "sim_render-npc0": 0.004580082716765227, "sim_render-npc1": 0.004385066621097518, "sim_render-npc2": 0.004272444748584135, "get_duckie_state": 1.8955748758198304e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.479185519288083, "agent_compute-ego0": 0.018537580231089652, "agent_compute-npc0": 0.03720217722433585, "agent_compute-npc1": 0.0364726855431074, "agent_compute-npc2": 0.035453856727223336, "complete-iteration": 0.49053722546424394, "set_robot_commands": 0.002699473757802704, "distance-from-start": 2.493075927361664, "deviation-center-line": 0.3957647315531792, "driven_lanedir_consec": 3.121511813017647, "sim_compute_sim_state": 0.024112786775753823, "sim_compute_performance-ego0": 0.00225232559957622, "sim_compute_performance-npc0": 0.002239789491818275, "sim_compute_performance-npc1": 0.002353457756984381, "sim_compute_performance-npc2": 0.002307170703087324}}
set_robot_commands (max/mean/median/min): 0.002699473757802704
sim_compute_performance-ego0 (max/mean/median/min): 0.00225232559957622
sim_compute_performance-npc0 (max/mean/median/min): 0.002239789491818275
sim_compute_performance-npc1 (max/mean/median/min): 0.002353457756984381
sim_compute_performance-npc2 (max/mean/median/min): 0.002307170703087324
sim_compute_sim_state (max/mean/median/min): 0.024112786775753823
sim_render-ego0 (max/mean/median/min): 0.004090100158879786
sim_render-npc0 (max/mean/median/min): 0.004580082716765227
sim_render-npc1 (max/mean/median/min): 0.004385066621097518
sim_render-npc2 (max/mean/median/min): 0.004272444748584135
simulation-passed: 1
step_physics (max/mean/median/min): 0.2482999003963706
survival_time (max/mean/min): 8.04999999999998
No reset possible
Job 70807 | submission 13640 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LF-sim-testing | sim-0of4 | success | up to date: no | gpu-production-spot-0-01 | 0:04:24
driven_lanedir_consec_median: 9.758689421637422
survival_time_median: 24.95000000000022
deviation-center-line_median: 1.3087084195971643
in-drivable-lane_median: 2.2000000000000313


other stats
agent_compute-ego0 (max/mean/median/min): 0.01841104030609131
complete-iteration (max/mean/median/min): 0.19517608499526976
deviation-center-line (max/mean/min): 1.3087084195971643
deviation-heading (max/mean/median/min): 3.9487218919284928
distance-from-start (max/mean/median/min): 3.251106136024505
driven_any (max/mean/median/min): 10.635095058267964
driven_lanedir_consec (max/mean/min): 9.758689421637422
driven_lanedir (max/mean/median/min): 9.758689421637422
get_duckie_state (max/mean/median/min): 2.2630691528320315e-06
get_robot_state (max/mean/median/min): 0.004162018299102783
get_state_dump (max/mean/median/min): 0.005204709053039551
get_ui_image (max/mean/median/min): 0.020522019386291505
in-drivable-lane (max/mean/min): 2.2000000000000313
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 10.635095058267964, "get_ui_image": 0.020522019386291505, "step_physics": 0.12944625568389892, "survival_time": 24.95000000000022, "driven_lanedir": 9.758689421637422, "get_state_dump": 0.005204709053039551, "get_robot_state": 0.004162018299102783, "sim_render-ego0": 0.004041709423065185, "get_duckie_state": 2.2630691528320315e-06, "in-drivable-lane": 2.2000000000000313, "deviation-heading": 3.9487218919284928, "agent_compute-ego0": 0.01841104030609131, "complete-iteration": 0.19517608499526976, "set_robot_commands": 0.0026465516090393064, "distance-from-start": 3.251106136024505, "deviation-center-line": 1.3087084195971643, "driven_lanedir_consec": 9.758689421637422, "sim_compute_sim_state": 0.008423842906951905, "sim_compute_performance-ego0": 0.0022121362686157225}}
set_robot_commands (max/mean/median/min): 0.0026465516090393064
sim_compute_performance-ego0 (max/mean/median/min): 0.0022121362686157225
sim_compute_sim_state (max/mean/median/min): 0.008423842906951905
sim_render-ego0 (max/mean/median/min): 0.004041709423065185
simulation-passed: 1
step_physics (max/mean/median/min): 0.12944625568389892
survival_time (max/mean/min): 24.95000000000022
No reset possible
Job 70757 | submission 13535 | András Kalapos 🇭🇺 | 3090 | aido-LF-sim-validation | sim-1of4 | success | up to date: no | gpu-production-spot-0-01 | 0:10:16
driven_lanedir_consec_median: 27.663391608144572
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.4837613069928355
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0 (max/mean/median/min): 0.0148037978751177
complete-iteration (max/mean/median/min): 0.20380437681815905
deviation-center-line (max/mean/min): 2.4837613069928355
deviation-heading (max/mean/median/min): 8.720961035772198
distance-from-start (max/mean/median/min): 3.454323023380464
driven_any (max/mean/median/min): 28.07634727167503
driven_lanedir_consec (max/mean/min): 27.663391608144572
driven_lanedir (max/mean/median/min): 27.663391608144572
get_duckie_state (max/mean/median/min): 2.1818972547088037e-06
get_robot_state (max/mean/median/min): 0.00388274165017718
get_state_dump (max/mean/median/min): 0.005007387299422519
get_ui_image (max/mean/median/min): 0.02216455323015224
in-drivable-lane (max/mean/min): 0.0
per-episodes
details{"LF-norm-techtrack-000-ego0": {"driven_any": 28.07634727167503, "get_ui_image": 0.02216455323015224, "step_physics": 0.13771210165444658, "survival_time": 59.99999999999873, "driven_lanedir": 27.663391608144572, "get_state_dump": 0.005007387299422519, "get_robot_state": 0.00388274165017718, "sim_render-ego0": 0.003872652832018545, "get_duckie_state": 2.1818972547088037e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.720961035772198, "agent_compute-ego0": 0.0148037978751177, "complete-iteration": 0.20380437681815905, "set_robot_commands": 0.00243442203480437, "distance-from-start": 3.454323023380464, "deviation-center-line": 2.4837613069928355, "driven_lanedir_consec": 27.663391608144572, "sim_compute_sim_state": 0.011757502051614703, "sim_compute_performance-ego0": 0.002075834337817342}}
set_robot_commands (max/mean/median/min): 0.00243442203480437
sim_compute_performance-ego0 (max/mean/median/min): 0.002075834337817342
sim_compute_sim_state (max/mean/median/min): 0.011757502051614703
sim_render-ego0 (max/mean/median/min): 0.003872652832018545
simulation-passed: 1
step_physics (max/mean/median/min): 0.13771210165444658
survival_time (max/mean/min): 59.99999999999873
No reset possible
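For the job above (70757), the per-iteration timing components very nearly sum to `complete-iteration`. A minimal sketch, with the numbers copied from that job's stats, of computing each component's share of the simulation loop:

```python
# Per-iteration timings (seconds) copied from job 70757's stats above.
components = {
    "step_physics": 0.13771210165444658,
    "get_ui_image": 0.02216455323015224,
    "agent_compute-ego0": 0.0148037978751177,
    "sim_compute_sim_state": 0.011757502051614703,
    "get_state_dump": 0.005007387299422519,
    "get_robot_state": 0.00388274165017718,
    "sim_render-ego0": 0.003872652832018545,
    "set_robot_commands": 0.00243442203480437,
    "sim_compute_performance-ego0": 0.002075834337817342,
    "get_duckie_state": 2.1818972547088037e-06,
}
complete_iteration = 0.20380437681815905

# The listed components account for almost the whole iteration time.
assert abs(sum(components.values()) - complete_iteration) < 1e-3

# Fraction of each iteration spent in each component.
shares = {name: t / complete_iteration for name, t in components.items()}
```

`step_physics` alone is roughly two thirds of each iteration, so physics stepping, not agent inference, bounds the evaluation rate here.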
Job 70752 | submission 13564 | Márton Tim 🇭🇺 | 3626 | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-01 | 0:02:53
driven_lanedir_consec_median: 4.222393088928897
survival_time_median: 11.200000000000024
deviation-center-line_median: 0.6843712949412801
in-drivable-lane_median: 1.1500000000000088


other stats
agent_compute-ego0 (max/mean/median/min): 0.04955348968505859
complete-iteration (max/mean/median/min): 0.2486890951792399
deviation-center-line (max/mean/min): 0.6843712949412801
deviation-heading (max/mean/median/min): 2.6848469400698836
distance-from-start (max/mean/median/min): 2.1287279757297237
driven_any (max/mean/median/min): 4.954379391807778
driven_lanedir_consec (max/mean/min): 4.222393088928897
driven_lanedir (max/mean/median/min): 4.222393088928897
get_duckie_state (max/mean/median/min): 1.326666937934028e-06
get_robot_state (max/mean/median/min): 0.00393728150261773
get_state_dump (max/mean/median/min): 0.00488978385925293
get_ui_image (max/mean/median/min): 0.02488840421040853
in-drivable-lane (max/mean/min): 1.1500000000000088
per-episodes
details{"LF-norm-zigzag-000-ego0": {"driven_any": 4.954379391807778, "get_ui_image": 0.02488840421040853, "step_physics": 0.1464088206821018, "survival_time": 11.200000000000024, "driven_lanedir": 4.222393088928897, "get_state_dump": 0.00488978385925293, "get_robot_state": 0.00393728150261773, "sim_render-ego0": 0.003919793234931098, "get_duckie_state": 1.326666937934028e-06, "in-drivable-lane": 1.1500000000000088, "deviation-heading": 2.6848469400698836, "agent_compute-ego0": 0.04955348968505859, "complete-iteration": 0.2486890951792399, "set_robot_commands": 0.002449400160047743, "distance-from-start": 2.1287279757297237, "deviation-center-line": 0.6843712949412801, "driven_lanedir_consec": 4.222393088928897, "sim_compute_sim_state": 0.010480081770155164, "sim_compute_performance-ego0": 0.002071701685587565}}
set_robot_commands (max/mean/median/min): 0.002449400160047743
sim_compute_performance-ego0 (max/mean/median/min): 0.002071701685587565
sim_compute_sim_state (max/mean/median/min): 0.010480081770155164
sim_render-ego0 (max/mean/median/min): 0.003919793234931098
simulation-passed: 1
step_physics (max/mean/median/min): 0.1464088206821018
survival_time (max/mean/min): 11.200000000000024
No reset possible
Job 70728 | submission 13611 | Raphael Jean | mobile-segmentation-pedestrian | aido-LF-sim-validation | sim-3of4 | success | up to date: no | gpu-production-spot-0-01 | 0:10:35
driven_lanedir_consec_median: 24.770852710032177
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.307974832238108
in-drivable-lane_median: 2.049999999999918


other stats
agent_compute-ego0 (max/mean/median/min): 0.018827191399694184
complete-iteration (max/mean/median/min): 0.24396588661390775
deviation-center-line (max/mean/min): 3.307974832238108
deviation-heading (max/mean/median/min): 11.897800910523433
distance-from-start (max/mean/median/min): 3.7331311467048103
driven_any (max/mean/median/min): 26.4107755736083
driven_lanedir_consec (max/mean/min): 24.770852710032177
driven_lanedir (max/mean/median/min): 24.770852710032177
get_duckie_state (max/mean/median/min): 1.4710088852144695e-06
get_robot_state (max/mean/median/min): 0.004029946759975919
get_state_dump (max/mean/median/min): 0.005071880815428163
get_ui_image (max/mean/median/min): 0.02680318698994226
in-drivable-lane (max/mean/min): 2.049999999999918
per-episodes
details{"LF-norm-zigzag-000-ego0": {"driven_any": 26.4107755736083, "get_ui_image": 0.02680318698994226, "step_physics": 0.16689792640203244, "survival_time": 59.99999999999873, "driven_lanedir": 24.770852710032177, "get_state_dump": 0.005071880815428163, "get_robot_state": 0.004029946759975919, "sim_render-ego0": 0.00404384134214784, "get_duckie_state": 1.4710088852144695e-06, "in-drivable-lane": 2.049999999999918, "deviation-heading": 11.897800910523433, "agent_compute-ego0": 0.018827191399694184, "complete-iteration": 0.24396588661390775, "set_robot_commands": 0.0025109232315711433, "distance-from-start": 3.7331311467048103, "deviation-center-line": 3.307974832238108, "driven_lanedir_consec": 24.770852710032177, "sim_compute_sim_state": 0.01349261460157358, "sim_compute_performance-ego0": 0.0021857905645950152}}
set_robot_commands (max/mean/median/min): 0.0025109232315711433
sim_compute_performance-ego0 (max/mean/median/min): 0.0021857905645950152
sim_compute_sim_state (max/mean/median/min): 0.01349261460157358
sim_render-ego0 (max/mean/median/min): 0.00404384134214784
simulation-passed: 1
step_physics (max/mean/median/min): 0.16689792640203244
survival_time (max/mean/min): 59.99999999999873
No reset possible
Job 70697 | submission 13641 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | 0:08:45
driven_lanedir_consec_median: 23.564878465701824
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.698152853399909
in-drivable-lane_median: 3.100000000000027


other stats
agent_compute-ego0_max0.01850808927359728
agent_compute-ego0_mean0.01850808927359728
agent_compute-ego0_median0.01850808927359728
agent_compute-ego0_min0.01850808927359728
complete-iteration_max0.1841620328920668
complete-iteration_mean0.1841620328920668
complete-iteration_median0.1841620328920668
complete-iteration_min0.1841620328920668
deviation-center-line_max3.698152853399909
deviation-center-line_mean3.698152853399909
deviation-center-line_min3.698152853399909
deviation-heading_max14.715861599073484
deviation-heading_mean14.715861599073484
deviation-heading_median14.715861599073484
deviation-heading_min14.715861599073484
distance-from-start_max1.3039094328389882
distance-from-start_mean1.3039094328389882
distance-from-start_median1.3039094328389882
distance-from-start_min1.3039094328389882
driven_any_max26.15196024030985
driven_any_mean26.15196024030985
driven_any_median26.15196024030985
driven_any_min26.15196024030985
driven_lanedir_consec_max23.564878465701824
driven_lanedir_consec_mean23.564878465701824
driven_lanedir_consec_min23.564878465701824
driven_lanedir_max23.564878465701824
driven_lanedir_mean23.564878465701824
driven_lanedir_median23.564878465701824
driven_lanedir_min23.564878465701824
get_duckie_state_max1.4070865018083889e-06
get_duckie_state_mean1.4070865018083889e-06
get_duckie_state_median1.4070865018083889e-06
get_duckie_state_min1.4070865018083889e-06
get_robot_state_max0.004095660955284557
get_robot_state_mean0.004095660955284557
get_robot_state_median0.004095660955284557
get_robot_state_min0.004095660955284557
get_state_dump_max0.0050996471503493585
get_state_dump_mean0.0050996471503493585
get_state_dump_median0.0050996471503493585
get_state_dump_min0.0050996471503493585
get_ui_image_max0.018779390757526587
get_ui_image_mean0.018779390757526587
get_ui_image_median0.018779390757526587
get_ui_image_min0.018779390757526587
in-drivable-lane_max3.100000000000027
in-drivable-lane_mean3.100000000000027
in-drivable-lane_min3.100000000000027
per-episodes
details{"LF-norm-small_loop-000-ego0": {"driven_any": 26.15196024030985, "get_ui_image": 0.018779390757526587, "step_physics": 0.12273908991500004, "survival_time": 59.99999999999873, "driven_lanedir": 23.564878465701824, "get_state_dump": 0.0050996471503493585, "get_robot_state": 0.004095660955284557, "sim_render-ego0": 0.0040485569082827095, "get_duckie_state": 1.4070865018083889e-06, "in-drivable-lane": 3.100000000000027, "deviation-heading": 14.715861599073484, "agent_compute-ego0": 0.01850808927359728, "complete-iteration": 0.1841620328920668, "set_robot_commands": 0.0025814137391305587, "distance-from-start": 1.3039094328389882, "deviation-center-line": 3.698152853399909, "driven_lanedir_consec": 23.564878465701824, "sim_compute_sim_state": 0.006076762916444243, "sim_compute_performance-ego0": 0.0021406877646339027}}
set_robot_commands (max/mean/median/min): 0.0025814137391305587
sim_compute_performance-ego0 (max/mean/median/min): 0.0021406877646339027
sim_compute_sim_state (max/mean/median/min): 0.006076762916444243
sim_render-ego0 (max/mean/median/min): 0.0040485569082827095
simulation-passed: 1
step_physics (max/mean/median/min): 0.12273908991500004
survival_time (max/mean/min): 59.99999999999873
No reset possible
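Several jobs above report survival_time 59.99999999999873, i.e. the 60 s episode cap; the odd trailing digits are consistent with a fixed frame period accumulated in floating point. A small sketch of that effect (the 0.05 s period and the 1200-step count are assumptions, not taken from this page):

```python
# Accumulate a nominal 60 s episode as 1200 steps of 0.05 s each.
# 0.05 has no exact binary representation, so the running sum drifts.
dt = 0.05
total = 0.0
for _ in range(1200):
    total += dt

# The result lands within a nanosecond of 60 s, but typically not
# exactly on 60.0 - hence values like 59.99999999999873 above.
assert abs(total - 60.0) < 1e-9
```

The same pattern would explain values like 8.649999999999988 and 24.95000000000022 in the shorter episodes.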
Job 70648 | submission 13611 | Raphael Jean | mobile-segmentation-pedestrian | aido-LF-sim-validation | sim-2of4 | success | up to date: no | gpu-production-spot-0-01 | 0:08:52
driven_lanedir_consec_median: 26.12204876022964
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.4717729116718306
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max0.018688282303567928
agent_compute-ego0_mean0.018688282303567928
agent_compute-ego0_median0.018688282303567928
agent_compute-ego0_min0.018688282303567928
complete-iteration_max0.18994754021808963
complete-iteration_mean0.18994754021808963
complete-iteration_median0.18994754021808963
complete-iteration_min0.18994754021808963
deviation-center-line_max2.4717729116718306
deviation-center-line_mean2.4717729116718306
deviation-center-line_min2.4717729116718306
deviation-heading_max9.816359075535914
deviation-heading_mean9.816359075535914
deviation-heading_median9.816359075535914
deviation-heading_min9.816359075535914
distance-from-start_max1.1055029405462082
distance-from-start_mean1.1055029405462082
distance-from-start_median1.1055029405462082
distance-from-start_min1.1055029405462082
driven_any_max26.65615518078294
driven_any_mean26.65615518078294
driven_any_median26.65615518078294
driven_any_min26.65615518078294
driven_lanedir_consec_max26.12204876022964
driven_lanedir_consec_mean26.12204876022964
driven_lanedir_consec_min26.12204876022964
driven_lanedir_max26.12204876022964
driven_lanedir_mean26.12204876022964
driven_lanedir_median26.12204876022964
driven_lanedir_min26.12204876022964
get_duckie_state_max1.5859500653142238e-06
get_duckie_state_mean1.5859500653142238e-06
get_duckie_state_median1.5859500653142238e-06
get_duckie_state_min1.5859500653142238e-06
get_robot_state_max0.0041456347599712435
get_robot_state_mean0.0041456347599712435
get_robot_state_median0.0041456347599712435
get_robot_state_min0.0041456347599712435
get_state_dump_max0.005177748987418626
get_state_dump_mean0.005177748987418626
get_state_dump_median0.005177748987418626
get_state_dump_min0.005177748987418626
get_ui_image_max0.019157261971530073
get_ui_image_mean0.019157261971530073
get_ui_image_median0.019157261971530073
get_ui_image_min0.019157261971530073
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details: {"LF-norm-small_loop-000-ego0": {"driven_any": 26.65615518078294, "get_ui_image": 0.019157261971530073, "step_physics": 0.1278750999682551, "survival_time": 59.99999999999873, "driven_lanedir": 26.12204876022964, "get_state_dump": 0.005177748987418626, "get_robot_state": 0.0041456347599712435, "sim_render-ego0": 0.003995250603440004, "get_duckie_state": 1.5859500653142238e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.816359075535914, "agent_compute-ego0": 0.018688282303567928, "complete-iteration": 0.18994754021808963, "set_robot_commands": 0.0026135950858745846, "distance-from-start": 1.1055029405462082, "deviation-center-line": 2.4717729116718306, "driven_lanedir_consec": 26.12204876022964, "sim_compute_sim_state": 0.006045157863734466, "sim_compute_performance-ego0": 0.002155168566676004}}
set_robot_commands_max: 0.0026135950858745846
set_robot_commands_mean: 0.0026135950858745846
set_robot_commands_median: 0.0026135950858745846
set_robot_commands_min: 0.0026135950858745846
sim_compute_performance-ego0_max: 0.002155168566676004
sim_compute_performance-ego0_mean: 0.002155168566676004
sim_compute_performance-ego0_median: 0.002155168566676004
sim_compute_performance-ego0_min: 0.002155168566676004
sim_compute_sim_state_max: 0.006045157863734466
sim_compute_sim_state_mean: 0.006045157863734466
sim_compute_sim_state_median: 0.006045157863734466
sim_compute_sim_state_min: 0.006045157863734466
sim_render-ego0_max: 0.003995250603440004
sim_render-ego0_mean: 0.003995250603440004
sim_render-ego0_median: 0.003995250603440004
sim_render-ego0_min: 0.003995250603440004
simulation-passed: 1
step_physics_max: 0.1278750999682551
step_physics_mean: 0.1278750999682551
step_physics_median: 0.1278750999682551
step_physics_min: 0.1278750999682551
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job ID 70644 | submission 13611 | user Raphael Jean | user label mobile-segmentation-pedestrian | challenge aido-LF-sim-validation | step sim-1of4 | status host-error | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:01:11
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/docker_compose.py", line 53, in get_services_id
    container = client.containers.get(container_id)
  File "/usr/local/lib/python3.8/dist-packages/docker/models/containers.py", line 880, in get
    resp = self.client.api.inspect_container(container_id)
  File "/usr/local/lib/python3.8/dist-packages/docker/utils/decorators.py", line 16, in wrapped
    raise errors.NullResource(
docker.errors.NullResource: Resource ID was not provided

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 745, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 959, in run_single
    write_logs(wd, project, services=config["services"])
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/docker_compose.py", line 120, in write_logs
    services2id: Dict[ServiceName, ContainerID] = get_services_id(wd, project, services)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/docker_compose.py", line 63, in get_services_id
    raise DockerComposeFail(msg, output=output.decode(), names=names) from e
duckietown_challenges_runner.docker_compose.DockerComposeFail: Cannot get process ids
│ output: ''
│  names: {}
No reset possible
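The host-error above is docker-py refusing an empty container id: `containers.get(None)` raises `NullResource` inside a decorator before any API call is made, and the runner then wraps it in `DockerComposeFail` because compose reported no ids (`output: ''`, `names: {}`). A hedged sketch of that failure mode with an early guard; `StubContainers` and `lookup_container` are illustrative stand-ins, not docker-py's or the runner's real classes:

```python
class NullResource(Exception):
    """Mirrors docker.errors.NullResource for this self-contained sketch."""

class StubContainers:
    """Stand-in for docker.from_env().containers."""
    def get(self, container_id):
        if not container_id:  # docker-py's decorator performs this check
            raise NullResource("Resource ID was not provided")
        return {"id": container_id}

def lookup_container(containers, container_id):
    # Guard first: an empty id means compose never reported the service,
    # so fail with a diagnosable message instead of a bare NullResource.
    if not container_id:
        raise ValueError("no container id reported for service; was it started?")
    return containers.get(container_id)
```

The point of the guard is that the root cause (no running service, e.g. on a reclaimed spot instance) surfaces directly rather than as a `NullResource` two layers down.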
Job ID 70625 | submission 14816 | user Bea Baselines 🐀 | user label straight | challenge aido-LFP-sim-validation | step sim-0of4 | status success | up to date: no | evaluator gpu-production-spot-0-01 | duration 0:05:07
survival_time_median: 14.850000000000076
in-drivable-lane_median: 13.450000000000076
driven_lanedir_consec_median: 0.1090692978324248
deviation-center-line_median: 0.08978598221263635


other stats
agent_compute-ego0_max: 0.010939183651200876
agent_compute-ego0_mean: 0.010939183651200876
agent_compute-ego0_median: 0.010939183651200876
agent_compute-ego0_min: 0.010939183651200876
complete-iteration_max: 0.17931029780599095
complete-iteration_mean: 0.17931029780599095
complete-iteration_median: 0.17931029780599095
complete-iteration_min: 0.17931029780599095
deviation-center-line_max: 0.08978598221263635
deviation-center-line_mean: 0.08978598221263635
deviation-center-line_min: 0.08978598221263635
deviation-heading_max: 1.146160833018269
deviation-heading_mean: 1.146160833018269
deviation-heading_median: 1.146160833018269
deviation-heading_min: 1.146160833018269
distance-from-start_max: 2.2809361249080293
distance-from-start_mean: 2.2809361249080293
distance-from-start_median: 2.2809361249080293
distance-from-start_min: 2.2809361249080293
driven_any_max: 2.281451796299496
driven_any_mean: 2.281451796299496
driven_any_median: 2.281451796299496
driven_any_min: 2.281451796299496
driven_lanedir_consec_max: 0.1090692978324248
driven_lanedir_consec_mean: 0.1090692978324248
driven_lanedir_consec_min: 0.1090692978324248
driven_lanedir_max: 0.1090692978324248
driven_lanedir_mean: 0.1090692978324248
driven_lanedir_median: 0.1090692978324248
driven_lanedir_min: 0.1090692978324248
get_duckie_state_max: 0.02119639175850273
get_duckie_state_mean: 0.02119639175850273
get_duckie_state_median: 0.02119639175850273
get_duckie_state_min: 0.02119639175850273
get_robot_state_max: 0.0038236779654586073
get_robot_state_mean: 0.0038236779654586073
get_robot_state_median: 0.0038236779654586073
get_robot_state_min: 0.0038236779654586073
get_state_dump_max: 0.00954387892012628
get_state_dump_mean: 0.00954387892012628
get_state_dump_median: 0.00954387892012628
get_state_dump_min: 0.00954387892012628
get_ui_image_max: 0.03561976772026728
get_ui_image_mean: 0.03561976772026728
get_ui_image_median: 0.03561976772026728
get_ui_image_min: 0.03561976772026728
in-drivable-lane_max: 13.450000000000076
in-drivable-lane_mean: 13.450000000000076
in-drivable-lane_min: 13.450000000000076
per-episodes
details: {"LFP-norm-zigzag-000-ego0": {"driven_any": 2.281451796299496, "get_ui_image": 0.03561976772026728, "step_physics": 0.07395722162003485, "survival_time": 14.850000000000076, "driven_lanedir": 0.1090692978324248, "get_state_dump": 0.00954387892012628, "get_robot_state": 0.0038236779654586073, "sim_render-ego0": 0.003881315256925238, "get_duckie_state": 0.02119639175850273, "in-drivable-lane": 13.450000000000076, "deviation-heading": 1.146160833018269, "agent_compute-ego0": 0.010939183651200876, "complete-iteration": 0.17931029780599095, "set_robot_commands": 0.0022538012306162177, "distance-from-start": 2.2809361249080293, "deviation-center-line": 0.08978598221263635, "driven_lanedir_consec": 0.1090692978324248, "sim_compute_sim_state": 0.015751005819179868, "sim_compute_performance-ego0": 0.0022493920870275304}}
set_robot_commands_max: 0.0022538012306162177
set_robot_commands_mean: 0.0022538012306162177
set_robot_commands_median: 0.0022538012306162177
set_robot_commands_min: 0.0022538012306162177
sim_compute_performance-ego0_max: 0.0022493920870275304
sim_compute_performance-ego0_mean: 0.0022493920870275304
sim_compute_performance-ego0_median: 0.0022493920870275304
sim_compute_performance-ego0_min: 0.0022493920870275304
sim_compute_sim_state_max: 0.015751005819179868
sim_compute_sim_state_mean: 0.015751005819179868
sim_compute_sim_state_median: 0.015751005819179868
sim_compute_sim_state_min: 0.015751005819179868
sim_render-ego0_max: 0.003881315256925238
sim_render-ego0_mean: 0.003881315256925238
sim_render-ego0_median: 0.003881315256925238
sim_render-ego0_min: 0.003881315256925238
simulation-passed: 1
step_physics_max: 0.07395722162003485
step_physics_mean: 0.07395722162003485
step_physics_median: 0.07395722162003485
step_physics_min: 0.07395722162003485
survival_time_max: 14.850000000000076
survival_time_mean: 14.850000000000076
survival_time_min: 14.850000000000076
No reset possible
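One readable signal in the straight baseline's episode: `in-drivable-lane` appears to count seconds spent *outside* a drivable lane (it is 0.0 for the clean lane-following run above), so nearly the whole 14.85 s episode was off the lane. A quick check, using values copied from the episode details:

```python
# Values copied from the LFP-norm-zigzag-000-ego0 episode above.
survival_time = 14.850000000000076     # episode length in seconds
in_drivable_lane = 13.450000000000076  # seconds apparently spent off the drivable lane

# Fraction of the episode spent outside the lane (roughly 0.906 here).
fraction_out = in_drivable_lane / survival_time
```

That fraction is consistent with the tiny `driven_lanedir_consec` of 0.109 m against 2.28 m of total distance driven.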