
Evaluator 5013

ID: 5013
evaluator: reg02
owner: I don't have one 😀
machine: archimede_c3f15323bf18
process: reg02_c3f15323bf18
version: 6.2.5
first heard:
last heard:
status: inactive
# evaluating:
# success: 38 (62058)
# timeout:
# failed: 26 (62065)
# error: 1 (62116)
# aborted: 7 (62258)
# host-error: 12 (62050)
arm: 0
x86_64: 1
Mac: 0
gpu available: 1
Number of processors: 40
Processor frequency: 2.1 GHz
Free % of processors: 88%
RAM total: 376.6 GB
RAM free: 241.4 GB
Disk: 74209.1 GB
Disk available: 59013.2 GB
Docker Hub:
P1: 1
P2:
Cloud simulations: 1
PI Camera: 0
# Duckiebots: 0
Map 3x3 available:
Number of duckies:
gpu cores:
AIDO 2 Map LF public:
AIDO 2 Map LF private:
AIDO 2 Map LFV public:
AIDO 2 Map LFV private:
AIDO 2 Map LFVI public:
AIDO 2 Map LFVI private:
AIDO 3 Map LF public:
AIDO 3 Map LF private:
AIDO 3 Map LFV public:
AIDO 3 Map LFV private:
AIDO 3 Map LFVI public:
AIDO 3 Map LFVI private:
AIDO 5 Map large loop:
ETU track:
for 2021, map is ETH_small_inter:
IPFS mountpoint /ipfs available:
IPNS mountpoint /ipns available:
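
The entries above are the capability flags this evaluator advertises to the challenge server; a job can only be dispatched to an evaluator whose features cover the job's requirements. A minimal sketch of that matching idea (the dictionaries and the compatible() helper here are illustrative assumptions, not the actual duckietown-challenges scheduler code):

    # Illustrative only: the feature names mirror the table above; the
    # matching rule is an assumption, not the duckietown-challenges code.
    evaluator_features = {
        "x86_64": 1,
        "gpu available": 1,
        "Number of processors": 40,
        "Cloud simulations": 1,
        "PI Camera": 0,
    }

    job_requirements = {"x86_64": 1, "gpu available": 1}  # hypothetical job

    def compatible(features, required):
        # Eligible if the evaluator provides at least what the job asks for.
        return all(features.get(key, 0) >= need for key, need in required.items())

    print(compatible(evaluator_features, job_requirements))  # True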

Evaluator jobs

Job ID | submission | user | user label | challenge | step | status | up to date | evaluator | date started | date completed | duration | message (shown below each row)
63055 | 14032 | YU CHEN | CBC V2, mar28 bc, mar31_apr6 anomaly | aido-LFP-sim-validation | 350 | success | yes | reg02 | | | 0:11:53
survival_time_median: 6.874999999999984
in-drivable-lane_median: 2.6750000000000043
driven_lanedir_consec_median: 1.710897560905252
deviation-center-line_median: 0.35078201053652647


other stats
agent_compute-ego0_max: 0.15557641301836286
agent_compute-ego0_mean: 0.15156300381638765
agent_compute-ego0_median: 0.15064880332926986
agent_compute-ego0_min: 0.14937799558864803
complete-iteration_max: 0.6601814966575772
complete-iteration_mean: 0.5843315209678637
complete-iteration_median: 0.6069792886284526
complete-iteration_min: 0.46318600995697246
deviation-center-line_max: 0.529555831057489
deviation-center-line_mean: 0.3668637358682947
deviation-center-line_min: 0.23633509134263664
deviation-heading_max: 2.170328925725048
deviation-heading_mean: 1.6712708530125089
deviation-heading_median: 1.5859792119462628
deviation-heading_min: 1.3427960624324615
driven_any_max: 3.6355177367070537
driven_any_mean: 2.654083352976131
driven_any_median: 2.4444362108497444
driven_any_min: 2.0919432534979805
driven_lanedir_consec_max: 2.408865692840397
driven_lanedir_consec_mean: 1.7539915035954758
driven_lanedir_consec_min: 1.1853051997310022
driven_lanedir_max: 2.408865692840397
driven_lanedir_mean: 1.7539915035954758
driven_lanedir_median: 1.710897560905252
driven_lanedir_min: 1.1853051997310022
get_duckie_state_max: 0.09565005813326156
get_duckie_state_mean: 0.0710433619991823
get_duckie_state_median: 0.08649475202343981
get_duckie_state_min: 0.015533885816588018
get_robot_state_max: 0.014162217538187824
get_robot_state_mean: 0.013131937969682927
get_robot_state_median: 0.01299178854102346
get_robot_state_min: 0.012381957258496966
get_state_dump_max: 0.030659447397504536
get_state_dump_mean: 0.024876000060545148
get_state_dump_median: 0.025822007501495277
get_state_dump_min: 0.017200537841685497
get_ui_image_max: 0.0651553144641951
get_ui_image_mean: 0.057198112906418
get_ui_image_median: 0.0579834707944546
get_ui_image_min: 0.04767019557256768
in-drivable-lane_max: 3.449999999999989
in-drivable-lane_mean: 2.4249999999999985
in-drivable-lane_min: 0.8999999999999968
per-episodes details:
{"LFP-norm-loop-000-ego0": {"driven_any": 2.512247884368353, "get_ui_image": 0.05715076582772391, "step_physics": 0.204959545816694, "survival_time": 6.949999999999983, "driven_lanedir": 1.657139391319567, "get_state_dump": 0.030659447397504536, "get_robot_state": 0.012381957258496966, "sim_render-ego0": 0.01228536878313337, "get_duckie_state": 0.09565005813326156, "in-drivable-lane": 2.199999999999993, "deviation-heading": 1.4798720343357967, "agent_compute-ego0": 0.15557641301836286, "complete-iteration": 0.6026310886655535, "set_robot_commands": 0.00636211633682251, "deviation-center-line": 0.3646293804956887, "driven_lanedir_consec": 1.657139391319567, "sim_compute_sim_state": 0.020417390550885883, "sim_compute_performance-ego0": 0.006964669908796038}, "LFP-norm-zigzag-000-ego0": {"driven_any": 3.6355177367070537, "get_ui_image": 0.0651553144641951, "step_physics": 0.2578115323010613, "survival_time": 10.15000000000001, "driven_lanedir": 2.408865692840397, "get_state_dump": 0.0279464791802799, "get_robot_state": 0.013409753640492758, "sim_render-ego0": 0.011054298456977396, "get_duckie_state": 0.08802426562589757, "in-drivable-lane": 3.150000000000016, "deviation-heading": 2.170328925725048, "agent_compute-ego0": 0.15026911450367347, "complete-iteration": 0.6601814966575772, "set_robot_commands": 0.009878517365923116, "deviation-center-line": 0.529555831057489, "driven_lanedir_consec": 2.408865692840397, "sim_compute_sim_state": 0.02920422133277444, "sim_compute_performance-ego0": 0.007203561418196734}, "LFP-norm-techtrack-000-ego0": {"driven_any": 2.376624537331135, "get_ui_image": 0.058816175761185294, "step_physics": 0.22298614058907576, "survival_time": 6.299999999999986, "driven_lanedir": 1.7646557304909374, "get_state_dump": 0.023697535822710652, "get_robot_state": 0.014162217538187824, "sim_render-ego0": 0.012445840309924029, "get_duckie_state": 0.08496523842098207, "in-drivable-lane": 0.8999999999999968, "deviation-heading": 1.692086389556729, "agent_compute-ego0": 0.14937799558864803, "complete-iteration": 0.6113274885913519, "set_robot_commands": 0.008724163836381566, "deviation-center-line": 0.3369346405773643, "driven_lanedir_consec": 1.7646557304909374, "sim_compute_sim_state": 0.029151593606303056, "sim_compute_performance-ego0": 0.006776100068580447}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.0919432534979805, "get_ui_image": 0.04767019557256768, "step_physics": 0.16864428903064588, "survival_time": 6.799999999999984, "driven_lanedir": 1.1853051997310022, "get_state_dump": 0.017200537841685497, "get_robot_state": 0.01257382344155416, "sim_render-ego0": 0.012594318737948898, "get_duckie_state": 0.015533885816588018, "in-drivable-lane": 3.449999999999989, "deviation-heading": 1.3427960624324615, "agent_compute-ego0": 0.15102849215486625, "complete-iteration": 0.46318600995697246, "set_robot_commands": 0.01035955874589238, "deviation-center-line": 0.23633509134263664, "driven_lanedir_consec": 1.1853051997310022, "sim_compute_sim_state": 0.01872815354897158, "sim_compute_performance-ego0": 0.008599048113300853}}
set_robot_commands_max: 0.01035955874589238
set_robot_commands_mean: 0.008831089071254893
set_robot_commands_median: 0.00930134060115234
set_robot_commands_min: 0.00636211633682251
sim_compute_performance-ego0_max: 0.008599048113300853
sim_compute_performance-ego0_mean: 0.007385844877218518
sim_compute_performance-ego0_median: 0.0070841156634963865
sim_compute_performance-ego0_min: 0.006776100068580447
sim_compute_sim_state_max: 0.02920422133277444
sim_compute_sim_state_mean: 0.024375339759733744
sim_compute_sim_state_median: 0.02478449207859447
sim_compute_sim_state_min: 0.01872815354897158
sim_render-ego0_max: 0.012594318737948898
sim_render-ego0_mean: 0.012094956571995923
sim_render-ego0_median: 0.0123656045465287
sim_render-ego0_min: 0.011054298456977396
simulation-passed: 1
step_physics_max: 0.2578115323010613
step_physics_mean: 0.21360037693436923
step_physics_median: 0.21397284320288487
step_physics_min: 0.16864428903064588
survival_time_max: 10.15000000000001
survival_time_mean: 7.54999999999999
survival_time_min: 6.299999999999986
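
The _max/_mean/_median/_min rows above are aggregates over the four episodes in the per-episodes details JSON. A short sketch of how they can be recomputed (assuming the JSON has been saved to details.json; this is an illustration, not the evaluator's own scoring code):

    import json
    from statistics import mean, median

    # Assumption: the "per-episodes details" JSON above was saved to details.json.
    # Each top-level key is an episode; each value maps metric name -> value.
    with open("details.json") as f:
        per_episodes = json.load(f)

    # Pool each metric across episodes, then aggregate as in the table.
    metrics = {}
    for episode, values in per_episodes.items():
        for key, value in values.items():
            metrics.setdefault(key, []).append(value)

    for key, samples in sorted(metrics.items()):
        print(f"{key}_max: {max(samples)}")
        print(f"{key}_mean: {mean(samples)}")
        print(f"{key}_median: {median(samples)}")
        print(f"{key}_min: {min(samples)}")

With an even number of episodes, statistics.median averages the two middle values; that is how survival_time_median comes out as 6.875 for episode survival times of roughly 6.95, 10.15, 6.3, and 6.8 seconds.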
62859 | 13925 | Patrick Geneva | template-random | aido-hello-sim-validation | 370 | host-error | yes | reg02 | | | 0:00:41
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3A929b5188a9b5978ea78814123e2e9dc311d26d3ce658c7d75a975d0e04f6503d&fromImage=docker.io%2Fgoldbattle%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for goldbattle/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/goldbattle/aido-submissions@sha256:929b5188a9b5978ea78814123e2e9dc311d26d3ce658c7d75a975d0e04f6503d  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 777, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 976, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/goldbattle/aido-submissions@sha256:929b5188a9b5978ea78814123e2e9dc311d26d3ce658c7d75a975d0e04f6503d
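
This job and every host-error/aborted job below failed the same way: Docker Hub returns 404 / pull access denied for the submission image, and the runner gives up after four pull attempts with a 5-second wait (docker_pull_retry(client, image, ntimes=4, wait=5) in the traceback). A minimal sketch of such a retry loop using the Docker SDK for Python; this illustrates the behavior in the log and is not the actual duckietown_build_utils implementation:

    import time

    import docker
    from docker.errors import APIError, ImageNotFound

    def docker_pull_retry(client, image, ntimes=4, wait=5):
        # Try the pull up to `ntimes` times, sleeping `wait` seconds between
        # attempts. A 404 (ImageNotFound) never heals on retry, which is why
        # these jobs always burn all four attempts before failing.
        last_exc = None
        for attempt in range(ntimes):
            try:
                return client.images.pull(image)
            except (ImageNotFound, APIError) as e:
                last_exc = e
                if attempt + 1 < ntimes:
                    time.sleep(wait)
        raise RuntimeError(f"After trying {ntimes} I still could not pull {image}") from last_exc

    if __name__ == "__main__":
        # A public image as a runnable stand-in for the private submission image.
        docker_pull_retry(docker.from_env(), "alpine:3.19")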
62711 | 13870 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | host-error | yes | reg02 | | | 0:00:36
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3A2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809&fromImage=docker.io%2Fh0mlab%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for h0mlab/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 777, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 976, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809
62710 | 13870 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | host-error | yes | reg02 | | | 0:00:37
Uncaught exception: PullError (same traceback as job 62711 above: after 4 attempts the evaluator could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809).
62684 | 13866 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | host-error | yes | reg02 | | | 0:00:36
Uncaught exception: PullError (same traceback as job 62711 above: after 4 attempts the evaluator could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809).
62683 | 13866 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | host-error | yes | reg02 | | | 0:00:36
Uncaught exception: PullError (same traceback as job 62711 above: after 4 attempts the evaluator could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809).
62682 | 13866 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | host-error | yes | reg02 | | | 0:00:35
Uncaught exception: PullError (same traceback as job 62711 above: after 4 attempts the evaluator could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809).
62660 | 13866 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | aborted | yes | reg02 | | | 0:00:35
Uncaught exception: PullError (same traceback as job 62711 above: after 4 attempts the evaluator could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809).
62659 | 13866 | Peder Sefland | template-random | aido-hello-sim-validation | 370 | aborted | yes | reg02 | | | 0:00:35
Uncaught exception: PullError (same traceback as job 62711 above: after 4 attempts the evaluator could not pull docker.io/h0mlab/aido-submissions@sha256:2a8b625a92c565f6505a4c3cc11e9240bd29d6c68007963ac067d49418699809).
62596 | 13842 | Gopi Palaniappan | template-random | aido-hello-sim-validation | 370 | success | yes | reg02 | | | 0:01:44
survival_time_median: 2.1500000000000004
in-drivable-lane_median: 1.15
driven_lanedir_consec_median: 0.2357331065993169
deviation-center-line_median: 0.05863921966719952


other stats
agent_compute-ego0_max: 0.032821042971177536
agent_compute-ego0_mean: 0.032821042971177536
agent_compute-ego0_median: 0.032821042971177536
agent_compute-ego0_min: 0.032821042971177536
complete-iteration_max: 0.3010998043147
complete-iteration_mean: 0.3010998043147
complete-iteration_median: 0.3010998043147
complete-iteration_min: 0.3010998043147
deviation-center-line_max: 0.05863921966719952
deviation-center-line_mean: 0.05863921966719952
deviation-center-line_min: 0.05863921966719952
deviation-heading_max: 0.46108604520141144
deviation-heading_mean: 0.46108604520141144
deviation-heading_median: 0.46108604520141144
deviation-heading_min: 0.46108604520141144
driven_any_max: 0.4603380961060899
driven_any_mean: 0.4603380961060899
driven_any_median: 0.4603380961060899
driven_any_min: 0.4603380961060899
driven_lanedir_consec_max: 0.2357331065993169
driven_lanedir_consec_mean: 0.2357331065993169
driven_lanedir_consec_min: 0.2357331065993169
driven_lanedir_max: 0.2357331065993169
driven_lanedir_mean: 0.2357331065993169
driven_lanedir_median: 0.2357331065993169
driven_lanedir_min: 0.2357331065993169
get_duckie_state_max: 0.0204493295062672
get_duckie_state_mean: 0.0204493295062672
get_duckie_state_median: 0.0204493295062672
get_duckie_state_min: 0.0204493295062672
get_robot_state_max: 0.012748756191947245
get_robot_state_mean: 0.012748756191947245
get_robot_state_median: 0.012748756191947245
get_robot_state_min: 0.012748756191947245
get_state_dump_max: 0.017933357845653187
get_state_dump_mean: 0.017933357845653187
get_state_dump_median: 0.017933357845653187
get_state_dump_min: 0.017933357845653187
get_ui_image_max: 0.04723713072863492
get_ui_image_mean: 0.04723713072863492
get_ui_image_median: 0.04723713072863492
get_ui_image_min: 0.04723713072863492
in-drivable-lane_max: 1.15
in-drivable-lane_mean: 1.15
in-drivable-lane_min: 1.15
per-episodes details:
{"hello-norm-small_loop-000-ego0": {"driven_any": 0.4603380961060899, "get_ui_image": 0.04723713072863492, "step_physics": 0.11703097278421576, "survival_time": 2.1500000000000004, "driven_lanedir": 0.2357331065993169, "get_state_dump": 0.017933357845653187, "get_robot_state": 0.012748756191947245, "sim_render-ego0": 0.011904770677739923, "get_duckie_state": 0.0204493295062672, "in-drivable-lane": 1.15, "deviation-heading": 0.46108604520141144, "agent_compute-ego0": 0.032821042971177536, "complete-iteration": 0.3010998043147, "set_robot_commands": 0.013516225598075172, "deviation-center-line": 0.05863921966719952, "driven_lanedir_consec": 0.2357331065993169, "sim_compute_sim_state": 0.01757814125581221, "sim_compute_performance-ego0": 0.009653882546858356}}
set_robot_commands_max: 0.013516225598075172
set_robot_commands_mean: 0.013516225598075172
set_robot_commands_median: 0.013516225598075172
set_robot_commands_min: 0.013516225598075172
sim_compute_performance-ego0_max: 0.009653882546858356
sim_compute_performance-ego0_mean: 0.009653882546858356
sim_compute_performance-ego0_median: 0.009653882546858356
sim_compute_performance-ego0_min: 0.009653882546858356
sim_compute_sim_state_max: 0.01757814125581221
sim_compute_sim_state_mean: 0.01757814125581221
sim_compute_sim_state_median: 0.01757814125581221
sim_compute_sim_state_min: 0.01757814125581221
sim_render-ego0_max: 0.011904770677739923
sim_render-ego0_mean: 0.011904770677739923
sim_render-ego0_median: 0.011904770677739923
sim_render-ego0_min: 0.011904770677739923
simulation-passed: 1
step_physics_max: 0.11703097278421576
step_physics_mean: 0.11703097278421576
step_physics_median: 0.11703097278421576
step_physics_min: 0.11703097278421576
survival_time_max: 2.1500000000000004
survival_time_mean: 2.1500000000000004
survival_time_min: 2.1500000000000004
62575 | 13835 | Christian Beck | template-random | aido-hello-sim-validation | 370 | success | yes | reg02 | | | 0:01:44
survival_time_median: 2.1500000000000004
in-drivable-lane_median: 1.15
driven_lanedir_consec_median: 0.2357331065993169
deviation-center-line_median: 0.05863921966719952


other stats
agent_compute-ego0_max: 0.03965851935473355
agent_compute-ego0_mean: 0.03965851935473355
agent_compute-ego0_median: 0.03965851935473355
agent_compute-ego0_min: 0.03965851935473355
complete-iteration_max: 0.29540821638974274
complete-iteration_mean: 0.29540821638974274
complete-iteration_median: 0.29540821638974274
complete-iteration_min: 0.29540821638974274
deviation-center-line_max: 0.05863921966719952
deviation-center-line_mean: 0.05863921966719952
deviation-center-line_min: 0.05863921966719952
deviation-heading_max: 0.46108604520141144
deviation-heading_mean: 0.46108604520141144
deviation-heading_median: 0.46108604520141144
deviation-heading_min: 0.46108604520141144
driven_any_max: 0.4603380961060899
driven_any_mean: 0.4603380961060899
driven_any_median: 0.4603380961060899
driven_any_min: 0.4603380961060899
driven_lanedir_consec_max: 0.2357331065993169
driven_lanedir_consec_mean: 0.2357331065993169
driven_lanedir_consec_min: 0.2357331065993169
driven_lanedir_max: 0.2357331065993169
driven_lanedir_mean: 0.2357331065993169
driven_lanedir_median: 0.2357331065993169
driven_lanedir_min: 0.2357331065993169
get_duckie_state_max: 0.017410310831936924
get_duckie_state_mean: 0.017410310831936924
get_duckie_state_median: 0.017410310831936924
get_duckie_state_min: 0.017410310831936924
get_robot_state_max: 0.01641800728711215
get_robot_state_mean: 0.01641800728711215
get_robot_state_median: 0.01641800728711215
get_robot_state_min: 0.01641800728711215
get_state_dump_max: 0.019070121374997227
get_state_dump_mean: 0.019070121374997227
get_state_dump_median: 0.019070121374997227
get_state_dump_min: 0.019070121374997227
get_ui_image_max: 0.0460990233854814
get_ui_image_mean: 0.0460990233854814
get_ui_image_median: 0.0460990233854814
get_ui_image_min: 0.0460990233854814
in-drivable-lane_max: 1.15
in-drivable-lane_mean: 1.15
in-drivable-lane_min: 1.15
per-episodes details:
{"hello-norm-small_loop-000-ego0": {"driven_any": 0.4603380961060899, "get_ui_image": 0.0460990233854814, "step_physics": 0.10638183897191827, "survival_time": 2.1500000000000004, "driven_lanedir": 0.2357331065993169, "get_state_dump": 0.019070121374997227, "get_robot_state": 0.01641800728711215, "sim_render-ego0": 0.014617643573067406, "get_duckie_state": 0.017410310831936924, "in-drivable-lane": 1.15, "deviation-heading": 0.46108604520141144, "agent_compute-ego0": 0.03965851935473355, "complete-iteration": 0.29540821638974274, "set_robot_commands": 0.007910441268574108, "deviation-center-line": 0.05863921966719952, "driven_lanedir_consec": 0.2357331065993169, "sim_compute_sim_state": 0.019698289307680996, "sim_compute_performance-ego0": 0.007922134616158226}}
set_robot_commands_max: 0.007910441268574108
set_robot_commands_mean: 0.007910441268574108
set_robot_commands_median: 0.007910441268574108
set_robot_commands_min: 0.007910441268574108
sim_compute_performance-ego0_max: 0.007922134616158226
sim_compute_performance-ego0_mean: 0.007922134616158226
sim_compute_performance-ego0_median: 0.007922134616158226
sim_compute_performance-ego0_min: 0.007922134616158226
sim_compute_sim_state_max: 0.019698289307680996
sim_compute_sim_state_mean: 0.019698289307680996
sim_compute_sim_state_median: 0.019698289307680996
sim_compute_sim_state_min: 0.019698289307680996
sim_render-ego0_max: 0.014617643573067406
sim_render-ego0_mean: 0.014617643573067406
sim_render-ego0_median: 0.014617643573067406
sim_render-ego0_min: 0.014617643573067406
simulation-passed: 1
step_physics_max: 0.10638183897191827
step_physics_mean: 0.10638183897191827
step_physics_median: 0.10638183897191827
step_physics_min: 0.10638183897191827
survival_time_max: 2.1500000000000004
survival_time_mean: 2.1500000000000004
survival_time_min: 2.1500000000000004
62489 | 13800 | Raymond Pfaff | template-random | aido-hello-sim-validation | 370 | success | yes | reg02 | | | 0:01:48
survival_time_median: 2.1500000000000004
in-drivable-lane_median: 1.15
driven_lanedir_consec_median: 0.2357331065993169
deviation-center-line_median: 0.05863921966719952


other stats
agent_compute-ego0_max: 0.032694626938213005
agent_compute-ego0_mean: 0.032694626938213005
agent_compute-ego0_median: 0.032694626938213005
agent_compute-ego0_min: 0.032694626938213005
complete-iteration_max: 0.2660932649265636
complete-iteration_mean: 0.2660932649265636
complete-iteration_median: 0.2660932649265636
complete-iteration_min: 0.2660932649265636
deviation-center-line_max: 0.05863921966719952
deviation-center-line_mean: 0.05863921966719952
deviation-center-line_min: 0.05863921966719952
deviation-heading_max: 0.46108604520141144
deviation-heading_mean: 0.46108604520141144
deviation-heading_median: 0.46108604520141144
deviation-heading_min: 0.46108604520141144
driven_any_max: 0.4603380961060899
driven_any_mean: 0.4603380961060899
driven_any_median: 0.4603380961060899
driven_any_min: 0.4603380961060899
driven_lanedir_consec_max: 0.2357331065993169
driven_lanedir_consec_mean: 0.2357331065993169
driven_lanedir_consec_min: 0.2357331065993169
driven_lanedir_max: 0.2357331065993169
driven_lanedir_mean: 0.2357331065993169
driven_lanedir_median: 0.2357331065993169
driven_lanedir_min: 0.2357331065993169
get_duckie_state_max: 0.01187355409968983
get_duckie_state_mean: 0.01187355409968983
get_duckie_state_median: 0.01187355409968983
get_duckie_state_min: 0.01187355409968983
get_robot_state_max: 0.010834043676202948
get_robot_state_mean: 0.010834043676202948
get_robot_state_median: 0.010834043676202948
get_robot_state_min: 0.010834043676202948
get_state_dump_max: 0.01917796243320812
get_state_dump_mean: 0.01917796243320812
get_state_dump_median: 0.01917796243320812
get_state_dump_min: 0.01917796243320812
get_ui_image_max: 0.045136787674643776
get_ui_image_mean: 0.045136787674643776
get_ui_image_median: 0.045136787674643776
get_ui_image_min: 0.045136787674643776
in-drivable-lane_max: 1.15
in-drivable-lane_mean: 1.15
in-drivable-lane_min: 1.15
per-episodes details:
{"hello-norm-small_loop-000-ego0": {"driven_any": 0.4603380961060899, "get_ui_image": 0.045136787674643776, "step_physics": 0.10873993960293855, "survival_time": 2.1500000000000004, "driven_lanedir": 0.2357331065993169, "get_state_dump": 0.01917796243320812, "get_robot_state": 0.010834043676202948, "sim_render-ego0": 0.011182980103926226, "get_duckie_state": 0.01187355409968983, "in-drivable-lane": 1.15, "deviation-heading": 0.46108604520141144, "agent_compute-ego0": 0.032694626938213005, "complete-iteration": 0.2660932649265636, "set_robot_commands": 0.007163020697506991, "deviation-center-line": 0.05863921966719952, "driven_lanedir_consec": 0.2357331065993169, "sim_compute_sim_state": 0.012938737869262695, "sim_compute_performance-ego0": 0.006081927906383167}}
set_robot_commands_max: 0.007163020697506991
set_robot_commands_mean: 0.007163020697506991
set_robot_commands_median: 0.007163020697506991
set_robot_commands_min: 0.007163020697506991
sim_compute_performance-ego0_max: 0.006081927906383167
sim_compute_performance-ego0_mean: 0.006081927906383167
sim_compute_performance-ego0_median: 0.006081927906383167
sim_compute_performance-ego0_min: 0.006081927906383167
sim_compute_sim_state_max: 0.012938737869262695
sim_compute_sim_state_mean: 0.012938737869262695
sim_compute_sim_state_median: 0.012938737869262695
sim_compute_sim_state_min: 0.012938737869262695
sim_render-ego0_max: 0.011182980103926226
sim_render-ego0_mean: 0.011182980103926226
sim_render-ego0_median: 0.011182980103926226
sim_render-ego0_min: 0.011182980103926226
simulation-passed: 1
step_physics_max: 0.10873993960293855
step_physics_mean: 0.10873993960293855
step_physics_median: 0.10873993960293855
step_physics_min: 0.10873993960293855
survival_time_max: 2.1500000000000004
survival_time_mean: 2.1500000000000004
survival_time_min: 2.1500000000000004
62473 | 13798 | Nicholas Kostelnik | template-random | aido-hello-sim-validation | 370 | aborted | yes | reg02 | | | 0:00:36
Uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/create?tag=sha256%3Ab13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691&fromImage=docker.io%2Fnitaigao%2Faido-submissions

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 65, in docker_pull
    pulling = client.api.pull(repository=repository, tag=br.tag, stream=True, decode=True)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/image.py", line 415, in pull
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.ImageNotFound: 404 Client Error: Not Found ("pull access denied for nitaigao/aido-submissions, repository does not exist or may require 'docker login': denied: requested access to the resource is denied")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 38, in docker_pull_retry
    return docker_pull(client, image_name, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 84, in docker_pull
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: Cannot pull repo  docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691  tag  None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 777, in get_cr
    cr = run_single(
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 976, in run_single
    docker_pull_retry(client, image, ntimes=4, wait=5)
  File "/usr/local/lib/python3.8/dist-packages/duckietown_build_utils/docker_pulling.py", line 42, in docker_pull_retry
    raise PullError(msg) from e
duckietown_build_utils.docker_pulling.PullError: After trying 4 I still could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691
62472 | 13798 | Nicholas Kostelnik | template-random | aido-hello-sim-validation | 370 | aborted | yes | reg02 | | | 0:00:36
Uncaught exception: PullError (same traceback as job 62473 above: after 4 attempts the evaluator could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691).
62471 | 13798 | Nicholas Kostelnik | template-random | aido-hello-sim-validation | 370 | aborted | yes | reg02 | | | 0:00:36
Uncaught exception: PullError (same traceback as job 62473 above: after 4 attempts the evaluator could not pull docker.io/nitaigao/aido-submissions@sha256:b13078d04947eb3a802ebc4e9db985f6a60c2a3cae145d65fbd44ef0177e1691).
62440 | 13788 | Tass Oter | template-random | aido-hello-sim-validation | 370 | success | yes | reg02 | | | 0:01:47
survival_time_median: 2.1500000000000004
in-drivable-lane_median: 1.15
driven_lanedir_consec_median: 0.2357331065993169
deviation-center-line_median: 0.05863921966719952


other stats
agent_compute-ego0_max: 0.040158244696530426
agent_compute-ego0_mean: 0.040158244696530426
agent_compute-ego0_median: 0.040158244696530426
agent_compute-ego0_min: 0.040158244696530426
complete-iteration_max: 0.2877045382152904
complete-iteration_mean: 0.2877045382152904
complete-iteration_median: 0.2877045382152904
complete-iteration_min: 0.2877045382152904
deviation-center-line_max: 0.05863921966719952
deviation-center-line_mean: 0.05863921966719952
deviation-center-line_min: 0.05863921966719952
deviation-heading_max: 0.46108604520141144
deviation-heading_mean: 0.46108604520141144
deviation-heading_median: 0.46108604520141144
deviation-heading_min: 0.46108604520141144
driven_any_max: 0.4603380961060899
driven_any_mean: 0.4603380961060899
driven_any_median: 0.4603380961060899
driven_any_min: 0.4603380961060899
driven_lanedir_consec_max: 0.2357331065993169
driven_lanedir_consec_mean: 0.2357331065993169
driven_lanedir_consec_min: 0.2357331065993169
driven_lanedir_max: 0.2357331065993169
driven_lanedir_mean: 0.2357331065993169
driven_lanedir_median: 0.2357331065993169
driven_lanedir_min: 0.2357331065993169
get_duckie_state_max: 0.013264401392503218
get_duckie_state_mean: 0.013264401392503218
get_duckie_state_median: 0.013264401392503218
get_duckie_state_min: 0.013264401392503218
get_robot_state_max: 0.013390887867320667
get_robot_state_mean: 0.013390887867320667
get_robot_state_median: 0.013390887867320667
get_robot_state_min: 0.013390887867320667
get_state_dump_max: 0.01958321983164007
get_state_dump_mean: 0.01958321983164007
get_state_dump_median: 0.01958321983164007
get_state_dump_min: 0.01958321983164007
get_ui_image_max: 0.04899234663356434
get_ui_image_mean: 0.04899234663356434
get_ui_image_median: 0.04899234663356434
get_ui_image_min: 0.04899234663356434
in-drivable-lane_max: 1.15
in-drivable-lane_mean: 1.15
in-drivable-lane_min: 1.15
per-episodes details:
{"hello-norm-small_loop-000-ego0": {"driven_any": 0.4603380961060899, "get_ui_image": 0.04899234663356434, "step_physics": 0.10534048080444336, "survival_time": 2.1500000000000004, "driven_lanedir": 0.2357331065993169, "get_state_dump": 0.01958321983164007, "get_robot_state": 0.013390887867320667, "sim_render-ego0": 0.011238043958490544, "get_duckie_state": 0.013264401392503218, "in-drivable-lane": 1.15, "deviation-heading": 0.46108604520141144, "agent_compute-ego0": 0.040158244696530426, "complete-iteration": 0.2877045382152904, "set_robot_commands": 0.009259717030958696, "deviation-center-line": 0.05863921966719952, "driven_lanedir_consec": 0.2357331065993169, "sim_compute_sim_state": 0.01827640966935591, "sim_compute_performance-ego0": 0.007958059961145575}}
set_robot_commands_max: 0.009259717030958696
set_robot_commands_mean: 0.009259717030958696
set_robot_commands_median: 0.009259717030958696
set_robot_commands_min: 0.009259717030958696
sim_compute_performance-ego0_max: 0.007958059961145575
sim_compute_performance-ego0_mean: 0.007958059961145575
sim_compute_performance-ego0_median: 0.007958059961145575
sim_compute_performance-ego0_min: 0.007958059961145575
sim_compute_sim_state_max: 0.01827640966935591
sim_compute_sim_state_mean: 0.01827640966935591
sim_compute_sim_state_median: 0.01827640966935591
sim_compute_sim_state_min: 0.01827640966935591
sim_render-ego0_max: 0.011238043958490544
sim_render-ego0_mean: 0.011238043958490544
sim_render-ego0_median: 0.011238043958490544
sim_render-ego0_min: 0.011238043958490544
simulation-passed: 1
step_physics_max: 0.10534048080444336
step_physics_mean: 0.10534048080444336
step_physics_median: 0.10534048080444336
step_physics_min: 0.10534048080444336
survival_time_max: 2.1500000000000004
survival_time_mean: 2.1500000000000004
survival_time_min: 2.1500000000000004
No reset possible
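Note: every metric in these blocks is reported four ways (max, mean, median, min), aggregated over the episodes listed in the per-episodes details dict. This job ran a single episode, which is why all four aggregates coincide. A sketch of the aggregation, assuming the details structure shown above ({episode name: {metric: value}}):

import statistics

# One-episode example, abbreviated from the details dict above.
details = {
    "hello-norm-small_loop-000-ego0": {
        "survival_time": 2.1500000000000004,
        "driven_any": 0.4603380961060899,
    },
}

metrics = sorted({m for ep in details.values() for m in ep})
for m in metrics:
    values = [ep[m] for ep in details.values() if m in ep]
    # With a single episode, max == mean == median == min.
    print(f"{m}_max: {max(values)}")
    print(f"{m}_mean: {statistics.fmean(values)}")
    print(f"{m}_median: {statistics.median(values)}")
    print(f"{m}_min: {min(values)}")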
Job 62383 | submission 13763 | user: Andrea Censi 🇨🇭 | label: straight | challenge: aido-hello-sim-validation | step: 370 | status: success | up to date: no | evaluator: reg02 | duration: 0:02:21
survival_time_median: 4.199999999999993
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 0.5156583766631895
deviation-center-line_median: 0.11159597685449854


other stats
agent_compute-ego0_max: 0.032339985230389764
agent_compute-ego0_mean: 0.032339985230389764
agent_compute-ego0_median: 0.032339985230389764
agent_compute-ego0_min: 0.032339985230389764
complete-iteration_max: 0.32305496159721825
complete-iteration_mean: 0.32305496159721825
complete-iteration_median: 0.32305496159721825
complete-iteration_min: 0.32305496159721825
deviation-center-line_max: 0.11159597685449854
deviation-center-line_mean: 0.11159597685449854
deviation-center-line_min: 0.11159597685449854
deviation-heading_max: 0.5550782670524196
deviation-heading_mean: 0.5550782670524196
deviation-heading_median: 0.5550782670524196
deviation-heading_min: 0.5550782670524196
driven_any_max: 0.518718406586057
driven_any_mean: 0.518718406586057
driven_any_median: 0.518718406586057
driven_any_min: 0.518718406586057
driven_lanedir_consec_max: 0.5156583766631895
driven_lanedir_consec_mean: 0.5156583766631895
driven_lanedir_consec_min: 0.5156583766631895
driven_lanedir_max: 0.5156583766631895
driven_lanedir_mean: 0.5156583766631895
driven_lanedir_median: 0.5156583766631895
driven_lanedir_min: 0.5156583766631895
get_duckie_state_max: 0.053798891516292795
get_duckie_state_mean: 0.053798891516292795
get_duckie_state_median: 0.053798891516292795
get_duckie_state_min: 0.053798891516292795
get_robot_state_max: 0.01636351417092716
get_robot_state_mean: 0.01636351417092716
get_robot_state_median: 0.01636351417092716
get_robot_state_min: 0.01636351417092716
get_state_dump_max: 0.022936147802016315
get_state_dump_mean: 0.022936147802016315
get_state_dump_median: 0.022936147802016315
get_state_dump_min: 0.022936147802016315
get_ui_image_max: 0.0503745247335995
get_ui_image_mean: 0.0503745247335995
get_ui_image_median: 0.0503745247335995
get_ui_image_min: 0.0503745247335995
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"hello-norm-small_loop-000-ego0": {"driven_any": 0.518718406586057, "get_ui_image": 0.0503745247335995, "step_physics": 0.09993023872375488, "survival_time": 4.199999999999993, "driven_lanedir": 0.5156583766631895, "get_state_dump": 0.022936147802016315, "get_robot_state": 0.01636351417092716, "sim_render-ego0": 0.01349167543299058, "get_duckie_state": 0.053798891516292795, "in-drivable-lane": 0.0, "deviation-heading": 0.5550782670524196, "agent_compute-ego0": 0.032339985230389764, "complete-iteration": 0.32305496159721825, "set_robot_commands": 0.009257114634794348, "deviation-center-line": 0.11159597685449854, "driven_lanedir_consec": 0.5156583766631895, "sim_compute_sim_state": 0.016949993021347944, "sim_compute_performance-ego0": 0.007351502250222599}}
set_robot_commands_max: 0.009257114634794348
set_robot_commands_mean: 0.009257114634794348
set_robot_commands_median: 0.009257114634794348
set_robot_commands_min: 0.009257114634794348
sim_compute_performance-ego0_max: 0.007351502250222599
sim_compute_performance-ego0_mean: 0.007351502250222599
sim_compute_performance-ego0_median: 0.007351502250222599
sim_compute_performance-ego0_min: 0.007351502250222599
sim_compute_sim_state_max: 0.016949993021347944
sim_compute_sim_state_mean: 0.016949993021347944
sim_compute_sim_state_median: 0.016949993021347944
sim_compute_sim_state_min: 0.016949993021347944
sim_render-ego0_max: 0.01349167543299058
sim_render-ego0_mean: 0.01349167543299058
sim_render-ego0_median: 0.01349167543299058
sim_render-ego0_min: 0.01349167543299058
simulation-passed: 1
step_physics_max: 0.09993023872375488
step_physics_mean: 0.09993023872375488
step_physics_median: 0.09993023872375488
step_physics_min: 0.09993023872375488
survival_time_max: 4.199999999999993
survival_time_mean: 4.199999999999993
survival_time_min: 4.199999999999993
No reset possible
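Note: survival times such as 2.1500000000000004 and 4.199999999999993 are what you get when a fixed simulation timestep is accumulated in binary floating point: the exact decimal values 2.15 and 4.2 are not representable as doubles, so repeated addition drifts in the last digits. A small illustration (the 0.05 s step size is an assumption, chosen only because the reported survival times are multiples of 0.05):

# Accumulate a nominal 2.15 s in 0.05 s steps.
t = 0.0
for _ in range(43):
    t += 0.05
print(t)          # prints something close to, but usually not exactly, 2.15
print(t == 2.15)  # typically False: the accumulated rounding error survives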
Job 62378 | submission 13759 | user: Frank (Chude) Qian 🇨🇦 | label: CBC Net v1 - Best Loss | challenge: aido-LFP-sim-validation | step: 350 | status: aborted | up to date: yes | evaluator: reg02 | duration: 0:07:43
survival_time_median: 4.92499999999999
in-drivable-lane_median: 0.574999999999998
driven_lanedir_consec_median: 1.078366209811518
deviation-center-line_median: 0.2301051484803983


other stats
agent_compute-ego0_max: 0.10981068866593498
agent_compute-ego0_mean: 0.1015037928462827
agent_compute-ego0_median: 0.100795637504041
agent_compute-ego0_min: 0.09461320771111384
complete-iteration_max: 0.6045013538428715
complete-iteration_mean: 0.5035355524801147
complete-iteration_median: 0.5216493968286362
complete-iteration_min: 0.3663420624203152
deviation-center-line_max: 0.3497599943191688
deviation-center-line_mean: 0.22948410944673656
deviation-center-line_min: 0.10796614650698064
deviation-heading_max: 1.449932893760358
deviation-heading_mean: 1.0689835811682782
deviation-heading_median: 1.1432419019775235
deviation-heading_min: 0.5395176269577073
driven_any_max: 2.127949823075878
driven_any_mean: 1.4392358704954915
driven_any_median: 1.547766445569265
driven_any_min: 0.5334607677675589
driven_lanedir_consec_max: 1.7921273131555495
driven_lanedir_consec_mean: 1.112162344806759
driven_lanedir_consec_min: 0.4997896464484508
driven_lanedir_max: 1.7921273131555495
driven_lanedir_mean: 1.112162344806759
driven_lanedir_median: 1.078366209811518
driven_lanedir_min: 0.4997896464484508
get_duckie_state_max: 0.09713983973231884
get_duckie_state_mean: 0.06685883261866983
get_duckie_state_median: 0.07745392616621401
get_duckie_state_min: 0.015387638409932456
get_robot_state_max: 0.01340262166091374
get_robot_state_mean: 0.012131456901840976
get_robot_state_median: 0.012047513706765937
get_robot_state_min: 0.011028178532918294
get_state_dump_max: 0.03682596981525421
get_state_dump_mean: 0.028661659133857584
get_state_dump_median: 0.02887052566957361
get_state_dump_min: 0.02007961538102892
get_ui_image_max: 0.06596504577568599
get_ui_image_mean: 0.0568600688448106
get_ui_image_median: 0.05780036466444912
get_ui_image_min: 0.04587450027465821
in-drivable-lane_max: 2.6999999999999913
in-drivable-lane_mean: 0.9624999999999968
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.127949823075878, "get_ui_image": 0.05432129999913207, "step_physics": 0.17339632926730936, "survival_time": 5.399999999999989, "driven_lanedir": 1.7921273131555495, "get_state_dump": 0.032656087787873156, "get_robot_state": 0.012522948991268053, "sim_render-ego0": 0.010143973411770042, "get_duckie_state": 0.09713983973231884, "in-drivable-lane": 0.4999999999999982, "deviation-heading": 1.185064742160636, "agent_compute-ego0": 0.10694846100763444, "complete-iteration": 0.5226839135546203, "set_robot_commands": 0.007553748034555978, "deviation-center-line": 0.3497599943191688, "driven_lanedir_consec": 1.7921273131555495, "sim_compute_sim_state": 0.021234164544201777, "sim_compute_performance-ego0": 0.006513026876187105}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.5334607677675589, "get_ui_image": 0.06127942932976617, "step_physics": 0.1963036413545962, "survival_time": 2.6499999999999986, "driven_lanedir": 0.4997896464484508, "get_state_dump": 0.025084963551274053, "get_robot_state": 0.011572078422263815, "sim_render-ego0": 0.014231085777282717, "get_duckie_state": 0.07653826254385489, "in-drivable-lane": 0.0, "deviation-heading": 1.1014190617944113, "agent_compute-ego0": 0.09461320771111384, "complete-iteration": 0.5206148801026521, "set_robot_commands": 0.00924274656507704, "deviation-center-line": 0.2037958598628972, "driven_lanedir_consec": 0.4997896464484508, "sim_compute_sim_state": 0.022909786966111925, "sim_compute_performance-ego0": 0.00858938252484357}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.811286963800419, "get_ui_image": 0.06596504577568599, "step_physics": 0.23026799942765916, "survival_time": 5.549999999999988, "driven_lanedir": 1.502065148021555, "get_state_dump": 0.03682596981525421, "get_robot_state": 0.01340262166091374, "sim_render-ego0": 0.013046607375144958, "get_duckie_state": 0.07836958978857313, "in-drivable-lane": 0.6499999999999977, "deviation-heading": 1.449932893760358, "agent_compute-ego0": 0.10981068866593498, "complete-iteration": 0.6045013538428715, "set_robot_commands": 0.010606359158243452, "deviation-center-line": 0.2564144370978994, "driven_lanedir_consec": 1.502065148021555, "sim_compute_sim_state": 0.03499307802745274, "sim_compute_performance-ego0": 0.01096510248524802}, "LFP-norm-small_loop-000-ego0": {"driven_any": 1.284245927338111, "get_ui_image": 0.04587450027465821, "step_physics": 0.1352629237704807, "survival_time": 4.449999999999992, "driven_lanedir": 0.6546672716014805, "get_state_dump": 0.02007961538102892, "get_robot_state": 0.011028178532918294, "sim_render-ego0": 0.00998327996995714, "get_duckie_state": 0.015387638409932456, "in-drivable-lane": 2.6999999999999913, "deviation-heading": 0.5395176269577073, "agent_compute-ego0": 0.0946428140004476, "complete-iteration": 0.3663420624203152, "set_robot_commands": 0.007193371984693739, "deviation-center-line": 0.10796614650698064, "driven_lanedir_consec": 0.6546672716014805, "sim_compute_sim_state": 0.01836709976196289, "sim_compute_performance-ego0": 0.008288786146375867}}
set_robot_commands_max: 0.010606359158243452
set_robot_commands_mean: 0.008649056435642552
set_robot_commands_median: 0.00839824729981651
set_robot_commands_min: 0.007193371984693739
sim_compute_performance-ego0_max: 0.01096510248524802
sim_compute_performance-ego0_mean: 0.008589074508163642
sim_compute_performance-ego0_median: 0.008439084335609718
sim_compute_performance-ego0_min: 0.006513026876187105
sim_compute_sim_state_max: 0.03499307802745274
sim_compute_sim_state_mean: 0.024376032324932333
sim_compute_sim_state_median: 0.022071975755156847
sim_compute_sim_state_min: 0.01836709976196289
sim_render-ego0_max: 0.014231085777282717
sim_render-ego0_mean: 0.011851236633538714
sim_render-ego0_median: 0.0115952903934575
sim_render-ego0_min: 0.00998327996995714
simulation-passed: 1
step_physics_max: 0.23026799942765916
step_physics_mean: 0.18380772345501137
step_physics_median: 0.18484998531095276
step_physics_min: 0.1352629237704807
survival_time_max: 5.549999999999988
survival_time_mean: 4.512499999999992
survival_time_min: 2.6499999999999986
No reset possible
Job 62367 | submission 13754 | user: Frank (Chude) Qian 🇨🇦 | label: CBC Net v1 | challenge: aido-LF-sim-validation | step: 347 | status: success | up to date: yes | evaluator: reg02 | duration: 0:40:46
driven_lanedir_consec_median: 13.087925508056257
survival_time_median: 44.89999999999951
deviation-center-line_median: 2.8049197211703527
in-drivable-lane_median: 5.825000000000035


other stats
agent_compute-ego0_max: 0.09524312927609398
agent_compute-ego0_mean: 0.09345337015344692
agent_compute-ego0_median: 0.09423379824620104
agent_compute-ego0_min: 0.09010275484529172
complete-iteration_max: 0.44320640452143734
complete-iteration_mean: 0.42761473049400034
complete-iteration_median: 0.429346375024686
complete-iteration_min: 0.40855976740519206
deviation-center-line_max: 4.591843933816273
deviation-center-line_mean: 2.5788278750385656
deviation-center-line_min: 0.11362812399728472
deviation-heading_max: 21.363012825327445
deviation-heading_mean: 10.302930625784011
deviation-heading_median: 9.562096389437963
deviation-heading_min: 0.7245168989326686
driven_any_max: 27.205850997519203
driven_any_mean: 16.24502348158073
driven_any_median: 18.02672308708908
driven_any_min: 1.72079675462557
driven_lanedir_consec_max: 24.83495469004709
driven_lanedir_consec_mean: 12.907479038817163
driven_lanedir_consec_min: 0.6191104491090598
driven_lanedir_max: 24.83495469004709
driven_lanedir_mean: 12.907479038817163
driven_lanedir_median: 13.087925508056257
driven_lanedir_min: 0.6191104491090598
get_duckie_state_max: 4.00785403287381e-06
get_duckie_state_mean: 3.896828931136047e-06
get_duckie_state_median: 3.932302351025528e-06
get_duckie_state_min: 3.71485698961932e-06
get_robot_state_max: 0.013819126761227624
get_robot_state_mean: 0.012829100865197176
get_robot_state_median: 0.013081694299670574
get_robot_state_min: 0.011333888100219932
get_state_dump_max: 0.020228182247139632
get_state_dump_mean: 0.01736393512562341
get_state_dump_median: 0.017292487694569494
get_state_dump_min: 0.014642582866215026
get_ui_image_max: 0.0642131351289295
get_ui_image_mean: 0.05387491990348181
get_ui_image_median: 0.05217325490620853
get_ui_image_min: 0.04694003467258069
in-drivable-lane_max: 9.29999999999977
in-drivable-lane_mean: 6.074999999999957
in-drivable-lane_min: 3.3499999999999885
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 27.205850997519203, "get_ui_image": 0.05010327987130933, "step_physics": 0.20406513329251025, "survival_time": 59.99999999999873, "driven_lanedir": 24.83495469004709, "get_state_dump": 0.020228182247139632, "get_robot_state": 0.013536969986088963, "sim_render-ego0": 0.010859133500441424, "get_duckie_state": 4.00785403287381e-06, "in-drivable-lane": 3.999999999999961, "deviation-heading": 11.384700885403191, "agent_compute-ego0": 0.09357896772252829, "complete-iteration": 0.4333869243640884, "set_robot_commands": 0.009265205445237998, "deviation-center-line": 4.591843933816273, "driven_lanedir_consec": 24.83495469004709, "sim_compute_sim_state": 0.024420758667436863, "sim_compute_performance-ego0": 0.007131007589170279}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.72079675462557, "get_ui_image": 0.0642131351289295, "step_physics": 0.17424098423549106, "survival_time": 5.1999999999999895, "driven_lanedir": 0.6191104491090598, "get_state_dump": 0.015235950833275204, "get_robot_state": 0.012626418613252186, "sim_render-ego0": 0.009958596456618536, "get_duckie_state": 3.912335350399925e-06, "in-drivable-lane": 3.3499999999999885, "deviation-heading": 0.7245168989326686, "agent_compute-ego0": 0.09524312927609398, "complete-iteration": 0.40855976740519206, "set_robot_commands": 0.008879264195760092, "deviation-center-line": 0.11362812399728472, "driven_lanedir_consec": 0.6191104491090598, "sim_compute_sim_state": 0.022559284028552826, "sim_compute_performance-ego0": 0.005402117683773949}, "LF-norm-techtrack-000-ego0": {"driven_any": 11.612838119518354, "get_ui_image": 0.05424322994110772, "step_physics": 0.22142767706508412, "survival_time": 29.80000000000029, "driven_lanedir": 7.9295959005599, "get_state_dump": 0.014642582866215026, "get_robot_state": 0.011333888100219932, "sim_render-ego0": 0.00990416336698548, "get_duckie_state": 3.71485698961932e-06, "in-drivable-lane": 7.650000000000109, "deviation-heading": 7.739491893472735, "agent_compute-ego0": 0.09010275484529172, "complete-iteration": 0.44320640452143734, "set_robot_commands": 0.007487141706636004, "deviation-center-line": 1.781083606360325, "driven_lanedir_consec": 7.9295959005599, "sim_compute_sim_state": 0.027778495296561335, "sim_compute_performance-ego0": 0.00609933910657413}, "LF-norm-small_loop-000-ego0": {"driven_any": 24.4406080546598, "get_ui_image": 0.04694003467258069, "step_physics": 0.20011698812568912, "survival_time": 59.99999999999873, "driven_lanedir": 18.24625511555261, "get_state_dump": 0.019349024555863785, "get_robot_state": 0.013819126761227624, "sim_render-ego0": 0.011566025728389284, "get_duckie_state": 3.952269351651131e-06, "in-drivable-lane": 9.29999999999977, "deviation-heading": 21.363012825327445, "agent_compute-ego0": 0.09488862876987378, "complete-iteration": 0.4253058256852835, "set_robot_commands": 0.00986468643074131, "deviation-center-line": 3.82875583598038, "driven_lanedir_consec": 18.24625511555261, "sim_compute_sim_state": 0.021461113604975185, "sim_compute_performance-ego0": 0.007104559802294373}}
set_robot_commands_max: 0.00986468643074131
set_robot_commands_mean: 0.00887407444459385
set_robot_commands_median: 0.009072234820499046
set_robot_commands_min: 0.007487141706636004
sim_compute_performance-ego0_max: 0.007131007589170279
sim_compute_performance-ego0_mean: 0.006434256045453183
sim_compute_performance-ego0_median: 0.006601949454434252
sim_compute_performance-ego0_min: 0.005402117683773949
sim_compute_sim_state_max: 0.027778495296561335
sim_compute_sim_state_mean: 0.024054912899381553
sim_compute_sim_state_median: 0.023490021347994845
sim_compute_sim_state_min: 0.021461113604975185
sim_render-ego0_max: 0.011566025728389284
sim_render-ego0_mean: 0.01057197976310868
sim_render-ego0_median: 0.01040886497852998
sim_render-ego0_min: 0.00990416336698548
simulation-passed: 1
step_physics_max: 0.22142767706508412
step_physics_mean: 0.19996269567969369
step_physics_median: 0.20209106070909968
step_physics_min: 0.17424098423549106
survival_time_max: 59.99999999999873
survival_time_mean: 38.74999999999943
survival_time_min: 5.1999999999999895
No reset possible
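Note: with an even number of episodes the median is the mean of the two middle values. Reproducing driven_lanedir_consec_median for the four LF-norm episodes of this job (values copied from the details dict; the result matches the reported 13.087925508056257 up to the last floating-point digit):

import statistics

driven_lanedir_consec = [
    24.83495469004709,    # LF-norm-loop-000-ego0
    0.6191104491090598,   # LF-norm-zigzag-000-ego0
    7.9295959005599,      # LF-norm-techtrack-000-ego0
    18.24625511555261,    # LF-norm-small_loop-000-ego0
]
# Even count: median = average of the two middle values after sorting.
print(statistics.median(driven_lanedir_consec))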
Job 62365 | submission 13511 | user: András Kalapos 🇭🇺 | label: real-v1.0-3091-310 | challenge: aido-LFP-sim-validation | step: 350 | status: success | up to date: yes | evaluator: reg02 | duration: 0:06:38
survival_time_median: 4.82499999999999
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.7807603552476678
deviation-center-line_median: 0.2125044631694849


other stats
agent_compute-ego0_max: 0.03423274435648104
agent_compute-ego0_mean: 0.03228880108893599
agent_compute-ego0_median: 0.03225643214891667
agent_compute-ego0_min: 0.03040959570142958
complete-iteration_max: 0.4904948065920574
complete-iteration_mean: 0.4420775635350581
complete-iteration_median: 0.4706707137875853
complete-iteration_min: 0.33647401997300447
deviation-center-line_max: 0.25030702338391125
deviation-center-line_mean: 0.20580105728634104
deviation-center-line_min: 0.14788827942248295
deviation-heading_max: 1.0746077097900149
deviation-heading_mean: 0.8339647427100714
deviation-heading_median: 0.8497564759944369
deviation-heading_min: 0.5617383090613964
driven_any_max: 2.434104267065927
driven_any_mean: 1.6471846881772971
driven_any_median: 1.8103105967594055
driven_any_min: 0.5340132921244504
driven_lanedir_consec_max: 2.392583861915514
driven_lanedir_consec_mean: 1.5906307887597428
driven_lanedir_consec_min: 0.40841858262812114
driven_lanedir_max: 2.392583861915514
driven_lanedir_mean: 1.5906307887597428
driven_lanedir_median: 1.7807603552476678
driven_lanedir_min: 0.40841858262812114
get_duckie_state_max: 0.07982875393555228
get_duckie_state_mean: 0.05495002182747045
get_duckie_state_median: 0.06398528077738072
get_duckie_state_min: 0.012000771819568072
get_robot_state_max: 0.01256347762213813
get_robot_state_mean: 0.011807057901233676
get_robot_state_median: 0.011950266404134758
get_robot_state_min: 0.01076422117452706
get_state_dump_max: 0.027374895607552876
get_state_dump_mean: 0.02340002415750089
get_state_dump_median: 0.02376242574806176
get_state_dump_min: 0.018700349526327164
get_ui_image_max: 0.0625195026397705
get_ui_image_mean: 0.05458469701845128
get_ui_image_median: 0.05451817411391441
get_ui_image_min: 0.04678293720620577
in-drivable-lane_max: 0.20000000000000007
in-drivable-lane_mean: 0.05000000000000002
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.158436217797806, "get_ui_image": 0.05281918660729333, "step_physics": 0.21832187196849723, "survival_time": 5.599999999999988, "driven_lanedir": 2.122501667991422, "get_state_dump": 0.025186344585587495, "get_robot_state": 0.01076422117452706, "sim_render-ego0": 0.009240551332456876, "get_duckie_state": 0.07982875393555228, "in-drivable-lane": 0.0, "deviation-heading": 1.0276683277994243, "agent_compute-ego0": 0.033938034445838594, "complete-iteration": 0.46576645732980915, "set_robot_commands": 0.0070303541369142785, "deviation-center-line": 0.2447953716748299, "driven_lanedir_consec": 2.122501667991422, "sim_compute_sim_state": 0.021389026557449745, "sim_compute_performance-ego0": 0.007030216993483822}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.5340132921244504, "get_ui_image": 0.0625195026397705, "step_physics": 0.23725022739834256, "survival_time": 2.2, "driven_lanedir": 0.40841858262812114, "get_state_dump": 0.022338506910536023, "get_robot_state": 0.01256347762213813, "sim_render-ego0": 0.01214906374613444, "get_duckie_state": 0.06048634847005208, "in-drivable-lane": 0.20000000000000007, "deviation-heading": 0.6718446241894495, "agent_compute-ego0": 0.03040959570142958, "complete-iteration": 0.4755749702453614, "set_robot_commands": 0.009253189298841688, "deviation-center-line": 0.14788827942248295, "driven_lanedir_consec": 0.40841858262812114, "sim_compute_sim_state": 0.019869539472791884, "sim_compute_performance-ego0": 0.00852453973558214}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.462184975721005, "get_ui_image": 0.0562171616205355, "step_physics": 0.24022969676227104, "survival_time": 4.049999999999994, "driven_lanedir": 1.4390190425039138, "get_state_dump": 0.027374895607552876, "get_robot_state": 0.011780203842535251, "sim_render-ego0": 0.010511247123160011, "get_duckie_state": 0.06748421308470935, "in-drivable-lane": 0.0, "deviation-heading": 0.5617383090613964, "agent_compute-ego0": 0.03423274435648104, "complete-iteration": 0.4904948065920574, "set_robot_commands": 0.006079007939594548, "deviation-center-line": 0.1802135546641399, "driven_lanedir_consec": 1.4390190425039138, "sim_compute_sim_state": 0.02979903977091719, "sim_compute_performance-ego0": 0.006563413433912323}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.434104267065927, "get_ui_image": 0.04678293720620577, "step_physics": 0.17565996920476196, "survival_time": 6.0499999999999865, "driven_lanedir": 2.392583861915514, "get_state_dump": 0.018700349526327164, "get_robot_state": 0.012120328965734265, "sim_render-ego0": 0.009067424008103668, "get_duckie_state": 0.012000771819568072, "in-drivable-lane": 0.0, "deviation-heading": 1.0746077097900149, "agent_compute-ego0": 0.03057482985199475, "complete-iteration": 0.33647401997300447, "set_robot_commands": 0.00690889163095443, "deviation-center-line": 0.25030702338391125, "driven_lanedir_consec": 2.392583861915514, "sim_compute_sim_state": 0.015746484037305487, "sim_compute_performance-ego0": 0.008703331478306504}}
set_robot_commands_max: 0.009253189298841688
set_robot_commands_mean: 0.007317860751576236
set_robot_commands_median: 0.0069696228839343545
set_robot_commands_min: 0.006079007939594548
sim_compute_performance-ego0_max: 0.008703331478306504
sim_compute_performance-ego0_mean: 0.007705375410321197
sim_compute_performance-ego0_median: 0.007777378364532981
sim_compute_performance-ego0_min: 0.006563413433912323
sim_compute_sim_state_max: 0.02979903977091719
sim_compute_sim_state_mean: 0.021701022459616075
sim_compute_sim_state_median: 0.020629283015120813
sim_compute_sim_state_min: 0.015746484037305487
sim_render-ego0_max: 0.01214906374613444
sim_render-ego0_mean: 0.010242071552463749
sim_render-ego0_median: 0.009875899227808444
sim_render-ego0_min: 0.009067424008103668
simulation-passed: 1
step_physics_max: 0.24022969676227104
step_physics_mean: 0.2178654413334682
step_physics_median: 0.2277860496834199
step_physics_min: 0.17565996920476196
survival_time_max: 6.0499999999999865
survival_time_mean: 4.474999999999992
survival_time_min: 2.2
No reset possible
Job 62363 | submission 13518 | user: András Kalapos 🇭🇺 | label: real-v1.0-3091-310 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: yes | evaluator: reg02 | duration: 0:01:55
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
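Note: the cuDNN failure above ("Failed to get convolution algorithm ... cuDNN failed to initialize") is the classic symptom of TensorFlow trying to reserve more GPU memory than is actually free when the first session is created. A common mitigation, offered here only as a generic TF2 sketch and not as a patch from the failing submission, is to enable incremental GPU memory growth before any op touches the GPU:

import tensorflow as tf

# Must run before the first operation or session touches the GPU.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Equivalently, set the environment variable before launching:
#   export TF_FORCE_GPU_ALLOW_GROWTH=true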
Job 62362 | submission 13518 | user: András Kalapos 🇭🇺 | label: real-v1.0-3091-310 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: yes | evaluator: reg02 | duration: 0:02:03
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
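
Inside the RemoteNodeAborted payload above, the root cause is TensorFlow's cuDNN initialization error ("Failed to get convolution algorithm"), raised while PPOTrainer built the value-network Relu defined in visionnet_v2.py. The log records the failure but not its cause; on a shared GPU evaluator the usual suspect is TensorFlow's default allocate-all-memory behavior. A hedged sketch of the standard TF 2.x mitigation, enabling memory growth before the model is built; whether it would fix this particular submission is an assumption:

    import tensorflow as tf

    # Assumption: cuDNN could not initialize because TensorFlow reserved the
    # whole GPU up front. Growing allocations on demand leaves cuDNN enough
    # headroom to create its handles; this must run before the
    # PPOTrainer(config=...) call in the submission's init().
    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)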
Job ID: 62361 | submission: 13518 | user: András Kalapos 🇭🇺 | user label: real-v1.0-3091-310 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: yes | evaluator: reg02 | duration: 0:02:03 | message: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              | [nested traceback omitted: identical, line for line, to the ego3 traceback in job 62362 above, with "ego2" as the node name; same root cause: tensorflow UnknownError "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize" at node default_policy/functional_1_1/conv_value_1/Relu, raised while solution.py init() constructed PPOTrainer and re-raised as "Exception while calling the node's init() function."]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
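The chained traceback above shows how the experiment manager surfaces agent-side failures: the inner RLlib/TensorFlow error becomes the __cause__ of an InvalidSubmission raised with "raise ... from e". A minimal sketch of that pattern, using only the standard library (InvalidSubmission, main, and wrap below are simplified stand-ins for the duckietown_challenges / duckietown_experiment_manager code, not the real implementations):

import asyncio


class InvalidSubmission(Exception):
    """Stand-in for duckietown_challenges.exceptions.InvalidSubmission."""


async def main() -> None:
    try:
        # Placeholder for querying the agent protocol; in the job above, the
        # real call failed deep inside the agent's RLlib/TensorFlow stack.
        raise RuntimeError("agent container failed while building the model")
    except Exception as e:
        msg = "Getting agent protocol"
        # "from e" sets __cause__, which is what prints "The above exception
        # was the direct cause of the following exception:" in the log.
        raise InvalidSubmission(msg) from e


def wrap() -> None:
    # Mirrors experiment_manager.wrap(): run the async entry point to completion.
    asyncio.run(main(), debug=True)


if __name__ == "__main__":
    wrap()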
Job 62359 | submission 13526 | user: András Kalapos 🇭🇺 | label: real-v1.0-3092-363 | challenge: aido-LFP-sim-validation | step: 350 | status: success | up to date: yes | evaluator: reg02 | duration: 0:06:51
survival_time_median: 5.04999999999999
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.9593204785449303
deviation-center-line_median: 0.23830090791283623


other stats
agent_compute-ego0_max: 0.037245718638102214
agent_compute-ego0_mean: 0.03503768114510519
agent_compute-ego0_median: 0.03504106052356755
agent_compute-ego0_min: 0.032822884895183424
complete-iteration_max: 0.5104097872972488
complete-iteration_mean: 0.4482669417733056
complete-iteration_median: 0.4712564550064228
complete-iteration_min: 0.34014506978312814
deviation-center-line_max: 0.3528529324499461
deviation-center-line_mean: 0.25304811796713617
deviation-center-line_min: 0.18273772359292612
deviation-heading_max: 0.9015726047438732
deviation-heading_mean: 0.7552447418226131
deviation-heading_median: 0.7634416887969286
deviation-heading_min: 0.592522984952722
driven_any_max: 2.5756950910974044
driven_any_mean: 1.7739016714869709
driven_any_median: 1.9859607253994271
driven_any_min: 0.5479901440516255
driven_lanedir_consec_max: 2.549101309173078
driven_lanedir_consec_mean: 1.7043317400964968
driven_lanedir_consec_min: 0.3495846941230478
driven_lanedir_max: 2.549101309173078
driven_lanedir_mean: 1.7043317400964968
driven_lanedir_median: 1.9593204785449303
driven_lanedir_min: 0.3495846941230478
get_duckie_state_max: 0.08570938419412684
get_duckie_state_mean: 0.05998241878898212
get_duckie_state_median: 0.07034056625432439
get_duckie_state_min: 0.013539158453152874
get_robot_state_max: 0.012790878613789875
get_robot_state_mean: 0.012267156236023622
get_robot_state_median: 0.012259168757332696
get_robot_state_min: 0.011759408815639224
get_state_dump_max: 0.02521308483900847
get_state_dump_mean: 0.02337679630506661
get_state_dump_median: 0.024908001720905303
get_state_dump_min: 0.01847809693944736
get_ui_image_max: 0.06558592584398058
get_ui_image_mean: 0.057290951660585746
get_ui_image_median: 0.05798748880624771
get_ui_image_min: 0.04760290318586695
in-drivable-lane_max: 0.3500000000000002
in-drivable-lane_mean: 0.08750000000000005
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.1626001152756587, "get_ui_image": 0.05467542012532552, "step_physics": 0.2040079258106373, "survival_time": 5.349999999999989, "driven_lanedir": 2.1374551883253323, "get_state_dump": 0.02521308483900847, "get_robot_state": 0.012790878613789875, "sim_render-ego0": 0.012308211238295943, "get_duckie_state": 0.08570938419412684, "in-drivable-lane": 0.0, "deviation-heading": 0.8447213492321968, "agent_compute-ego0": 0.032822884895183424, "complete-iteration": 0.4600662235860471, "set_robot_commands": 0.00794298119015164, "deviation-center-line": 0.26490646997124373, "driven_lanedir_consec": 2.1374551883253323, "sim_compute_sim_state": 0.01878176795111762, "sim_compute_performance-ego0": 0.005585321673640498}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.5479901440516255, "get_ui_image": 0.06558592584398058, "step_physics": 0.22704813215467665, "survival_time": 2.2, "driven_lanedir": 0.3495846941230478, "get_state_dump": 0.02478028933207194, "get_robot_state": 0.012485053804185657, "sim_render-ego0": 0.009172889921400284, "get_duckie_state": 0.06570533646477593, "in-drivable-lane": 0.3500000000000002, "deviation-heading": 0.592522984952722, "agent_compute-ego0": 0.037245718638102214, "complete-iteration": 0.4824466864267985, "set_robot_commands": 0.008246723810831707, "deviation-center-line": 0.18273772359292612, "driven_lanedir_consec": 0.3495846941230478, "sim_compute_sim_state": 0.023224751154581707, "sim_compute_performance-ego0": 0.008722427156236437}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.809321335523196, "get_ui_image": 0.0612995574871699, "step_physics": 0.2466374039649963, "survival_time": 4.749999999999991, "driven_lanedir": 1.7811857687645285, "get_state_dump": 0.025035714109738667, "get_robot_state": 0.012033283710479736, "sim_render-ego0": 0.009580289324124656, "get_duckie_state": 0.07497579604387283, "in-drivable-lane": 0.0, "deviation-heading": 0.6821620283616605, "agent_compute-ego0": 0.03540161748727163, "complete-iteration": 0.5104097872972488, "set_robot_commands": 0.00914417952299118, "deviation-center-line": 0.2116953458544288, "driven_lanedir_consec": 1.7811857687645285, "sim_compute_sim_state": 0.029024221003055573, "sim_compute_performance-ego0": 0.007052679856618245}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.5756950910974044, "get_ui_image": 0.04760290318586695, "step_physics": 0.17255732956833728, "survival_time": 6.299999999999986, "driven_lanedir": 2.549101309173078, "get_state_dump": 0.01847809693944736, "get_robot_state": 0.011759408815639224, "sim_render-ego0": 0.010691229752668244, "get_duckie_state": 0.013539158453152874, "in-drivable-lane": 0.0, "deviation-heading": 0.9015726047438732, "agent_compute-ego0": 0.034680503559863476, "complete-iteration": 0.34014506978312814, "set_robot_commands": 0.008635734948586292, "deviation-center-line": 0.3528529324499461, "driven_lanedir_consec": 2.549101309173078, "sim_compute_sim_state": 0.015632295233058178, "sim_compute_performance-ego0": 0.006348731949573427}}
set_robot_commands_max: 0.00914417952299118
set_robot_commands_mean: 0.008492404868140204
set_robot_commands_median: 0.008441229379708998
set_robot_commands_min: 0.00794298119015164
sim_compute_performance-ego0_max: 0.008722427156236437
sim_compute_performance-ego0_mean: 0.006927290159017152
sim_compute_performance-ego0_median: 0.0067007059030958355
sim_compute_performance-ego0_min: 0.005585321673640498
sim_compute_sim_state_max: 0.029024221003055573
sim_compute_sim_state_mean: 0.02166575883545327
sim_compute_sim_state_median: 0.021003259552849665
sim_compute_sim_state_min: 0.015632295233058178
sim_render-ego0_max: 0.012308211238295943
sim_render-ego0_mean: 0.01043815505912228
sim_render-ego0_median: 0.01013575953839645
sim_render-ego0_min: 0.009172889921400284
simulation-passed: 1
step_physics_max: 0.2466374039649963
step_physics_mean: 0.2125626978746619
step_physics_median: 0.21552802898265697
step_physics_min: 0.17255732956833728
survival_time_max: 6.299999999999986
survival_time_mean: 4.6499999999999915
survival_time_min: 2.2
No reset possible
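Every "other stats" block on this page is a per-metric aggregate (max/mean/median/min) over that job's episodes: the four survival_time values in the per-episodes details above (5.35, 2.2, 4.75, 6.3) reproduce the survival_time rows in the stats block exactly. A sketch of the recomputation, using only the standard library (the aggregate helper and the abbreviated episodes dict are illustrative, not platform API):

from statistics import mean, median

# Abbreviated from the per-episodes details of job 62359 above.
episodes = {
    "LFP-norm-loop-000-ego0": {"survival_time": 5.349999999999989},
    "LFP-norm-zigzag-000-ego0": {"survival_time": 2.2},
    "LFP-norm-techtrack-000-ego0": {"survival_time": 4.749999999999991},
    "LFP-norm-small_loop-000-ego0": {"survival_time": 6.299999999999986},
}


def aggregate(metric: str) -> dict:
    """Collect one metric across episodes and compute the four summary rows."""
    values = [ep[metric] for ep in episodes.values()]
    return {
        f"{metric}_max": max(values),
        f"{metric}_mean": mean(values),
        f"{metric}_median": median(values),
        f"{metric}_min": min(values),
    }


# Prints max 6.299..., mean 4.649..., median 5.049..., min 2.2, matching the
# survival_time rows in the stats block above.
print(aggregate("survival_time"))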
Job 62352 | submission 13543 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFV-sim-validation | step: 354 | status: success | up to date: yes | evaluator: reg02 | duration: 0:33:47
survival_time_median: 9.125000000000002
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.7774444823176174
deviation-center-line_median: 0.38905462673303226


other stats
agent_compute-ego0_max: 0.03597261596890265
agent_compute-ego0_mean: 0.03444380947989982
agent_compute-ego0_median: 0.034884395433420486
agent_compute-ego0_min: 0.032033831083855664
agent_compute-npc0_max: 0.08070842911597012
agent_compute-npc0_mean: 0.0707315090641091
agent_compute-npc0_median: 0.06999670077856615
agent_compute-npc0_min: 0.06222420558333397
complete-iteration_max: 1.885001434135842
complete-iteration_mean: 1.226869231097848
complete-iteration_median: 1.215947299218871
complete-iteration_min: 0.5905808918178082
deviation-center-line_max: 0.9419496488367314
deviation-center-line_mean: 0.4849446024904949
deviation-center-line_min: 0.21971950765918372
deviation-heading_max: 3.851988618657838
deviation-heading_mean: 1.9539208285368872
deviation-heading_median: 1.56366551162257
deviation-heading_min: 0.8363636722445723
driven_any_max: 10.537812964017052
driven_any_mean: 5.193516423753788
driven_any_median: 3.863845700866004
driven_any_min: 2.508561329266092
driven_lanedir_consec_max: 10.329569983640722
driven_lanedir_consec_mean: 5.090985952229395
driven_lanedir_consec_min: 2.4794848606416227
driven_lanedir_max: 10.329569983640722
driven_lanedir_mean: 5.090985952229395
driven_lanedir_median: 3.7774444823176174
driven_lanedir_min: 2.4794848606416227
get_duckie_state_max: 3.871050747958097e-06
get_duckie_state_mean: 3.5959206717497896e-06
get_duckie_state_median: 3.534300529056511e-06
get_duckie_state_min: 3.4440308809280396e-06
get_robot_state_max: 0.06628738455220956
get_robot_state_mean: 0.050876415758595125
get_robot_state_median: 0.0583606691209535
get_robot_state_min: 0.02049694024026394
get_state_dump_max: 0.034965328618782716
get_state_dump_mean: 0.03053344856861216
get_state_dump_median: 0.029752550324609536
get_state_dump_min: 0.02766336500644684
get_ui_image_max: 0.092848172359912
get_ui_image_mean: 0.07805844528764416
get_ui_image_median: 0.08226178015686511
get_ui_image_min: 0.05486204847693443
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LFV-norm-loop-000-ego0": {"driven_any": 4.655485786475105, "get_ui_image": 0.07599395730278709, "step_physics": 0.46224250793457033, "survival_time": 10.95000000000002, "driven_lanedir": 4.518307794054724, "get_state_dump": 0.02795258543708108, "get_robot_state": 0.05183791680769487, "sim_render-ego0": 0.01353457190773704, "sim_render-npc0": 0.011351552876559172, "sim_render-npc1": 0.011054586280475964, "sim_render-npc2": 0.011278256503018466, "get_duckie_state": 3.871050747958097e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.1258062121455565, "agent_compute-ego0": 0.03537544120441784, "agent_compute-npc0": 0.07097339955243197, "agent_compute-npc1": 0.07121495116840709, "agent_compute-npc2": 0.08680193640969017, "complete-iteration": 1.0391601649197666, "set_robot_commands": 0.008502364158630371, "deviation-center-line": 0.4656747842534762, "driven_lanedir_consec": 4.518307794054724, "sim_compute_sim_state": 0.04831180789253928, "sim_compute_performance-ego0": 0.0063727519728920675, "sim_compute_performance-npc0": 0.0066992402076721195, "sim_compute_performance-npc1": 0.006160469488664107, "sim_compute_performance-npc2": 0.006441969221288508}, "LFV-norm-zigzag-000-ego0": {"driven_any": 10.537812964017052, "get_ui_image": 0.092848172359912, "step_physics": 1.1405500656986438, "survival_time": 23.5000000000002, "driven_lanedir": 10.329569983640722, "get_state_dump": 0.03155251521213799, "get_robot_state": 0.06488342143421214, "sim_render-ego0": 0.010848929674508971, "sim_render-npc0": 0.010409623953946836, "sim_render-npc1": 0.010925166925806909, "sim_render-npc2": 0.010506238653907856, "sim_render-npc3": 0.010453736453046212, "get_duckie_state": 3.5945017626331113e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.851988618657838, "agent_compute-ego0": 0.03597261596890265, "agent_compute-npc0": 0.06902000200470035, "agent_compute-npc1": 0.07194677431871936, "agent_compute-npc2": 0.07485081334529156, "agent_compute-npc3": 0.0698170545501061, "complete-iteration": 1.885001434135842, "set_robot_commands": 0.00838162286519498, "deviation-center-line": 0.9419496488367314, "driven_lanedir_consec": 10.329569983640722, "sim_compute_sim_state": 0.10544544876001444, "sim_compute_performance-ego0": 0.006815848077178761, "sim_compute_performance-npc0": 0.007479064277276365, "sim_compute_performance-npc1": 0.0065976550877727014, "sim_compute_performance-npc2": 0.006475285613106568, "sim_compute_performance-npc3": 0.006837637814240344}, "LFV-norm-techtrack-000-ego0": {"driven_any": 3.072205615256902, "get_ui_image": 0.08852960301094315, "step_physics": 0.6216416310290901, "survival_time": 7.299999999999982, "driven_lanedir": 3.0365811705805106, "get_state_dump": 0.034965328618782716, "get_robot_state": 0.06628738455220956, "sim_render-ego0": 0.00962151151125123, "sim_render-npc0": 0.011350059184898323, "sim_render-npc1": 0.010589406603858584, "sim_render-npc2": 0.013634331372319435, "sim_render-npc3": 0.011664693858347782, "get_duckie_state": 3.4740992954799107e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.0015248110995834, "agent_compute-ego0": 0.032033831083855664, "agent_compute-npc0": 0.08070842911597012, "agent_compute-npc1": 0.0819012924116485, "agent_compute-npc2": 0.07617696775060122, "agent_compute-npc3": 0.0826807638414863, "complete-iteration": 1.392734433517975, "set_robot_commands": 0.00843611055490922, "deviation-center-line": 0.31243446921258833, "driven_lanedir_consec": 3.0365811705805106, "sim_compute_sim_state": 0.0949022948336439, 
"sim_compute_performance-ego0": 0.006971956110324989, "sim_compute_performance-npc0": 0.006794577553158715, "sim_compute_performance-npc1": 0.006700038909912109, "sim_compute_performance-npc2": 0.006123095142598055, "sim_compute_performance-npc3": 0.006233836517852991}, "LFV-norm-small_loop-000-ego0": {"driven_any": 2.508561329266092, "get_ui_image": 0.05486204847693443, "step_physics": 0.3170172553509474, "survival_time": 6.349999999999985, "driven_lanedir": 2.4794848606416227, "get_state_dump": 0.02766336500644684, "get_robot_state": 0.02049694024026394, "sim_render-ego0": 0.010109128430485724, "sim_render-npc0": 0.009894292801618576, "get_duckie_state": 3.4440308809280396e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.8363636722445723, "agent_compute-ego0": 0.034393349662423134, "agent_compute-npc0": 0.06222420558333397, "complete-iteration": 0.5905808918178082, "set_robot_commands": 0.008094081655144691, "deviation-center-line": 0.21971950765918372, "driven_lanedir_consec": 2.4794848606416227, "sim_compute_sim_state": 0.024923795834183693, "sim_compute_performance-ego0": 0.005902526900172234, "sim_compute_performance-npc0": 0.006662817671895027}}
set_robot_commands_max: 0.008502364158630371
set_robot_commands_mean: 0.008353544808469816
set_robot_commands_median: 0.0084088667100521
set_robot_commands_min: 0.008094081655144691
sim_compute_performance-ego0_max: 0.006971956110324989
sim_compute_performance-ego0_mean: 0.006515770765142013
sim_compute_performance-ego0_median: 0.006594300025035414
sim_compute_performance-ego0_min: 0.005902526900172234
sim_compute_performance-npc0_max: 0.007479064277276365
sim_compute_performance-npc0_mean: 0.006908924927500556
sim_compute_performance-npc0_median: 0.006746908880415417
sim_compute_performance-npc0_min: 0.006662817671895027
sim_compute_sim_state_max: 0.10544544876001444
sim_compute_sim_state_mean: 0.06839583683009533
sim_compute_sim_state_median: 0.07160705136309159
sim_compute_sim_state_min: 0.024923795834183693
sim_render-ego0_max: 0.01353457190773704
sim_render-ego0_mean: 0.01102853538099574
sim_render-ego0_median: 0.010479029052497348
sim_render-ego0_min: 0.00962151151125123
sim_render-npc0_max: 0.011351552876559172
sim_render-npc0_mean: 0.010751382204255727
sim_render-npc0_median: 0.01087984156942258
sim_render-npc0_min: 0.009894292801618576
simulation-passed: 1
step_physics_max: 1.1405500656986438
step_physics_mean: 0.6353628650033128
step_physics_median: 0.5419420694818302
step_physics_min: 0.3170172553509474
survival_time_max: 23.5000000000002
survival_time_mean: 12.025000000000048
survival_time_min: 6.349999999999985
No reset possible
Job 62339 | submission 13579 | user: Andras Beres | label: 202-1 | challenge: aido-LF-sim-testing | step: 348 | status: success | up to date: yes | evaluator: reg02 | duration: 1:05:50
driven_lanedir_consec_median: 28.84619913006924
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.970203078397303
in-drivable-lane_median: 1.4749999999999628


other stats
agent_compute-ego0_max: 0.1341736217422549
agent_compute-ego0_mean: 0.12578007397703286
agent_compute-ego0_median: 0.12349254146007375
agent_compute-ego0_min: 0.12196159124572908
complete-iteration_max: 0.5720763921142121
complete-iteration_mean: 0.4877136431566186
complete-iteration_median: 0.47778683667575983
complete-iteration_min: 0.4232045071607426
deviation-center-line_max: 4.127710224362671
deviation-center-line_mean: 3.945267088382845
deviation-center-line_min: 3.7129519723740994
deviation-heading_max: 10.501109418167191
deviation-heading_mean: 9.145659784394956
deviation-heading_median: 9.306303549683657
deviation-heading_min: 7.468922620045325
driven_any_max: 30.929537302909985
driven_any_mean: 29.555792723201492
driven_any_median: 29.87079644146479
driven_any_min: 27.5520407069664
driven_lanedir_consec_max: 30.346691393765976
driven_lanedir_consec_mean: 28.51160341809545
driven_lanedir_consec_min: 26.007324018477345
driven_lanedir_max: 30.346691393765976
driven_lanedir_mean: 28.51160341809545
driven_lanedir_median: 28.84619913006924
driven_lanedir_min: 26.007324018477345
get_duckie_state_max: 3.265600021832392e-06
get_duckie_state_mean: 3.095868227384569e-06
get_duckie_state_median: 3.105198513161233e-06
get_duckie_state_min: 2.9074758613834176e-06
get_robot_state_max: 0.014628841319151663
get_robot_state_mean: 0.012852531289379365
get_robot_state_median: 0.012748646894958394
get_robot_state_min: 0.011283990048449006
get_state_dump_max: 0.017818061437932377
get_state_dump_mean: 0.01636139295381074
get_state_dump_median: 0.016399284882112706
get_state_dump_min: 0.014828940613085186
get_ui_image_max: 0.06253422964224709
get_ui_image_mean: 0.05440576378252981
get_ui_image_median: 0.053502009969071285
get_ui_image_min: 0.048084805549729576
in-drivable-lane_max: 2.599999999999927
in-drivable-lane_mean: 1.387499999999963
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 30.929537302909985, "get_ui_image": 0.04875574084146136, "step_physics": 0.18003123269093027, "survival_time": 59.99999999999873, "driven_lanedir": 30.346691393765976, "get_state_dump": 0.016594274355708115, "get_robot_state": 0.014628841319151663, "sim_render-ego0": 0.011207368550550729, "get_duckie_state": 3.265600021832392e-06, "in-drivable-lane": 0.4999999999999858, "deviation-heading": 7.468922620045325, "agent_compute-ego0": 0.12196159124572908, "complete-iteration": 0.43121189042789354, "set_robot_commands": 0.006461008899317097, "deviation-center-line": 3.7129519723740994, "driven_lanedir_consec": 30.346691393765976, "sim_compute_sim_state": 0.024034436596720343, "sim_compute_performance-ego0": 0.007338520886995314}, "LF-norm-zigzag-000-ego0": {"driven_any": 27.5520407069664, "get_ui_image": 0.06253422964224709, "step_physics": 0.2984433849090144, "survival_time": 59.99999999999873, "driven_lanedir": 26.007324018477345, "get_state_dump": 0.017818061437932377, "get_robot_state": 0.01294851739837367, "sim_render-ego0": 0.011096315121869064, "get_duckie_state": 3.0924934431674775e-06, "in-drivable-lane": 2.44999999999994, "deviation-heading": 10.501109418167191, "agent_compute-ego0": 0.12329137414619389, "complete-iteration": 0.5720763921142121, "set_robot_commands": 0.007016805486814068, "deviation-center-line": 3.9997216720454754, "driven_lanedir_consec": 26.007324018477345, "sim_compute_sim_state": 0.03171412176534, "sim_compute_performance-ego0": 0.00701269380853734}, "LF-norm-techtrack-000-ego0": {"driven_any": 29.7144936538206, "get_ui_image": 0.05824827909668121, "step_physics": 0.25343568140422174, "survival_time": 59.99999999999873, "driven_lanedir": 28.258434951851065, "get_state_dump": 0.014828940613085186, "get_robot_state": 0.011283990048449006, "sim_render-ego0": 0.009954556934442449, "get_duckie_state": 2.9074758613834176e-06, "in-drivable-lane": 2.599999999999927, "deviation-heading": 8.742173693703313, "agent_compute-ego0": 0.1341736217422549, "complete-iteration": 0.5243617829236261, "set_robot_commands": 0.0063958233540302315, "deviation-center-line": 3.9406844847491302, "driven_lanedir_consec": 28.258434951851065, "sim_compute_sim_state": 0.02910862119866847, "sim_compute_performance-ego0": 0.0067445791134925605}, "LF-norm-small_loop-000-ego0": {"driven_any": 30.02709922910898, "get_ui_image": 0.048084805549729576, "step_physics": 0.17991989895664187, "survival_time": 59.99999999999873, "driven_lanedir": 29.43396330828741, "get_state_dump": 0.016204295408517297, "get_robot_state": 0.012548776391543118, "sim_render-ego0": 0.01059615066109053, "get_duckie_state": 3.117903583154988e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.870433405663997, "agent_compute-ego0": 0.12369370877395364, "complete-iteration": 0.4232045071607426, "set_robot_commands": 0.006630010549273717, "deviation-center-line": 4.127710224362671, "driven_lanedir_consec": 29.43396330828741, "sim_compute_sim_state": 0.017912273899303884, "sim_compute_performance-ego0": 0.007421195953712177}}
set_robot_commands_max: 0.007016805486814068
set_robot_commands_mean: 0.006625912072358779
set_robot_commands_median: 0.006545509724295407
set_robot_commands_min: 0.0063958233540302315
sim_compute_performance-ego0_max: 0.007421195953712177
sim_compute_performance-ego0_mean: 0.0071292474406843475
sim_compute_performance-ego0_median: 0.007175607347766327
sim_compute_performance-ego0_min: 0.0067445791134925605
sim_compute_sim_state_max: 0.03171412176534
sim_compute_sim_state_mean: 0.025692363365008176
sim_compute_sim_state_median: 0.026571528897694405
sim_compute_sim_state_min: 0.017912273899303884
sim_render-ego0_max: 0.011207368550550729
sim_render-ego0_mean: 0.010713597816988194
sim_render-ego0_median: 0.010846232891479795
sim_render-ego0_min: 0.009954556934442449
simulation-passed: 1
step_physics_max: 0.2984433849090144
step_physics_mean: 0.22795754949020203
step_physics_median: 0.216733457047576
step_physics_min: 0.17991989895664187
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 62338 | submission 13586 | user: Andras Beres | label: 202-1 | challenge: aido-LFP-sim-validation | step: 350 | status: success | up to date: yes | evaluator: reg02 | duration: 0:07:11
survival_time_median: 4.674999999999992
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.8108358619333904
deviation-center-line_median: 0.29566379245731533


other stats
agent_compute-ego0_max: 0.13346629447125374
agent_compute-ego0_mean: 0.1258509320386655
agent_compute-ego0_median: 0.12526798380840798
agent_compute-ego0_min: 0.1194014660665922
complete-iteration_max: 0.6308442125929162
complete-iteration_mean: 0.5464685857326619
complete-iteration_median: 0.5510884628824148
complete-iteration_min: 0.452853204572902
deviation-center-line_max: 0.4257199605729894
deviation-center-line_mean: 0.3022154764963995
deviation-center-line_min: 0.19181436049797795
deviation-heading_max: 1.214770654593495
deviation-heading_mean: 0.7660048699036226
deviation-heading_median: 0.7328560647892886
deviation-heading_min: 0.38353669544241825
driven_any_max: 2.646667534116804
driven_any_mean: 1.7157455667398422
driven_any_median: 1.837712427899878
driven_any_min: 0.540889877042809
driven_lanedir_consec_max: 2.593616185382574
driven_lanedir_consec_mean: 1.6873832356985865
driven_lanedir_consec_min: 0.53424503354499
driven_lanedir_max: 2.593616185382574
driven_lanedir_mean: 1.6873832356985865
driven_lanedir_median: 1.8108358619333904
driven_lanedir_min: 0.53424503354499
get_duckie_state_max: 0.08401360244394462
get_duckie_state_mean: 0.06190557807199842
get_duckie_state_median: 0.07474002206517305
get_duckie_state_min: 0.01412866571370293
get_robot_state_max: 0.012468317721752413
get_robot_state_mean: 0.011453207440705226
get_robot_state_median: 0.01167541895854003
get_robot_state_min: 0.009993674123988432
get_state_dump_max: 0.02474789084675156
get_state_dump_mean: 0.02154600309595405
get_state_dump_median: 0.022128656297752018
get_state_dump_min: 0.017178808941560632
get_ui_image_max: 0.07041559320815066
get_ui_image_mean: 0.05885441948531851
get_ui_image_median: 0.059644524616887525
get_ui_image_min: 0.04571303549934836
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.215345931084566, "get_ui_image": 0.05674900072757329, "step_physics": 0.18245203918385727, "survival_time": 5.299999999999989, "driven_lanedir": 2.192827296192858, "get_state_dump": 0.02474789084675156, "get_robot_state": 0.012276725234272324, "sim_render-ego0": 0.010412403356249088, "get_duckie_state": 0.08401360244394462, "in-drivable-lane": 0.0, "deviation-heading": 0.6869131618596741, "agent_compute-ego0": 0.1194014660665922, "complete-iteration": 0.5230597090498309, "set_robot_commands": 0.006231784820556641, "deviation-center-line": 0.31399878214877835, "driven_lanedir_consec": 2.192827296192858, "sim_compute_sim_state": 0.019443075233530775, "sim_compute_performance-ego0": 0.007129261426836531}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.540889877042809, "get_ui_image": 0.07041559320815066, "step_physics": 0.26032102868912066, "survival_time": 2.3, "driven_lanedir": 0.53424503354499, "get_state_dump": 0.021533626191159512, "get_robot_state": 0.012468317721752413, "sim_render-ego0": 0.012120622269650725, "get_duckie_state": 0.07962250202260118, "in-drivable-lane": 0.0, "deviation-heading": 0.38353669544241825, "agent_compute-ego0": 0.13346629447125374, "complete-iteration": 0.6308442125929162, "set_robot_commands": 0.005408941431248442, "deviation-center-line": 0.19181436049797795, "driven_lanedir_consec": 0.53424503354499, "sim_compute_sim_state": 0.026245107042028547, "sim_compute_performance-ego0": 0.009026593350349587}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.4600789247151902, "get_ui_image": 0.06254004850620176, "step_physics": 0.24299076126842964, "survival_time": 4.049999999999994, "driven_lanedir": 1.428844427673923, "get_state_dump": 0.022723686404344513, "get_robot_state": 0.011074112682807736, "sim_render-ego0": 0.01028843623835866, "get_duckie_state": 0.06985754210774492, "in-drivable-lane": 0.0, "deviation-heading": 0.778798967718903, "agent_compute-ego0": 0.12014060776408124, "complete-iteration": 0.5791172167149986, "set_robot_commands": 0.006133954699446515, "deviation-center-line": 0.27732880276585226, "driven_lanedir_consec": 1.428844427673923, "sim_compute_sim_state": 0.026843085521604956, "sim_compute_performance-ego0": 0.0063269777995784105}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.646667534116804, "get_ui_image": 0.04571303549934836, "step_physics": 0.19608478335773244, "survival_time": 6.749999999999984, "driven_lanedir": 2.593616185382574, "get_state_dump": 0.017178808941560632, "get_robot_state": 0.009993674123988432, "sim_render-ego0": 0.010386293425279506, "get_duckie_state": 0.01412866571370293, "in-drivable-lane": 0.0, "deviation-heading": 1.214770654593495, "agent_compute-ego0": 0.13039535985273473, "complete-iteration": 0.452853204572902, "set_robot_commands": 0.007047332385007073, "deviation-center-line": 0.4257199605729894, "driven_lanedir_consec": 2.593616185382574, "sim_compute_sim_state": 0.015485502341214348, "sim_compute_performance-ego0": 0.006247527459088494}}
set_robot_commands_max: 0.007047332385007073
set_robot_commands_mean: 0.006205503334064668
set_robot_commands_median: 0.006182869760001578
set_robot_commands_min: 0.005408941431248442
sim_compute_performance-ego0_max: 0.009026593350349587
sim_compute_performance-ego0_mean: 0.007182590008963256
sim_compute_performance-ego0_median: 0.00672811961320747
sim_compute_performance-ego0_min: 0.006247527459088494
sim_compute_sim_state_max: 0.026843085521604956
sim_compute_sim_state_mean: 0.022004192534594656
sim_compute_sim_state_median: 0.02284409113777966
sim_compute_sim_state_min: 0.015485502341214348
sim_render-ego0_max: 0.012120622269650725
sim_render-ego0_mean: 0.010801938822384494
sim_render-ego0_median: 0.010399348390764295
sim_render-ego0_min: 0.01028843623835866
simulation-passed: 1
step_physics_max: 0.26032102868912066
step_physics_mean: 0.220462153124785
step_physics_median: 0.21953777231308105
step_physics_min: 0.18245203918385727
survival_time_max: 6.749999999999984
survival_time_mean: 4.5999999999999925
survival_time_min: 2.3
No reset possible
Job 62331 | submission 13609 | user: Andras Beres | label: fsf+il | challenge: aido-LFV_multi-sim-validation | step: 356 | status: success | up to date: yes | evaluator: reg02 | duration: 1:26:11
survival_time_median: 25.500000000000227
in-drivable-lane_median: 0.1750000000000016
driven_lanedir_consec_median: 5.365662933258638
deviation-center-line_median: 1.6044755420254524


other stats
agent_compute-ego0_max: 0.15334198535134216
agent_compute-ego0_mean: 0.13865371213281324
agent_compute-ego0_median: 0.12998782072048598
agent_compute-ego0_min: 0.127238673708273
agent_compute-ego1_max: 0.1287445092459448
agent_compute-ego1_mean: 0.11758881268062071
agent_compute-ego1_median: 0.1126695533088042
agent_compute-ego1_min: 0.1089144919148858
complete-iteration_max: 1.714555342744568
complete-iteration_mean: 1.440327546104495
complete-iteration_median: 1.5709364087614297
complete-iteration_min: 0.7957370503348593
deviation-center-line_max: 4.685258885331869
deviation-center-line_mean: 2.0940919445811645
deviation-center-line_min: 0.46580390692958307
deviation-heading_max: 14.88277701798551
deviation-heading_mean: 5.198791364535423
deviation-heading_median: 3.4507574786886765
deviation-heading_min: 1.1525089054371125
driven_any_max: 22.4584202936406
driven_any_mean: 7.7383225525928925
driven_any_median: 5.426336045563694
driven_any_min: 1.008631692321679
driven_lanedir_consec_max: 21.08988072347855
driven_lanedir_consec_mean: 7.3657621018325345
driven_lanedir_consec_min: 0.996662061061377
driven_lanedir_max: 21.08988072347855
driven_lanedir_mean: 7.3657621018325345
driven_lanedir_median: 5.365662933258638
driven_lanedir_min: 0.996662061061377
get_duckie_state_max: 3.2557433365375793e-06
get_duckie_state_mean: 3.074806194187998e-06
get_duckie_state_median: 3.1251897445472725e-06
get_duckie_state_min: 2.7040517108040566e-06
get_robot_state_max: 0.05509491861942443
get_robot_state_mean: 0.04687981683121468
get_robot_state_median: 0.05017171148694317
get_robot_state_min: 0.02428164541351129
get_state_dump_max: 0.03848780277403683
get_state_dump_mean: 0.031156533004562396
get_state_dump_median: 0.03094858989537319
get_state_dump_min: 0.020449155606098057
get_ui_image_max: 0.08468200484540232
get_ui_image_mean: 0.0775713991071053
get_ui_image_median: 0.08082133729863773
get_ui_image_min: 0.057040678047985766
in-drivable-lane_max: 7.0000000000000995
in-drivable-lane_mean: 1.07142857142858
in-drivable-lane_min: 0.0
per-episodes
details{"LFV_multi-norm-loop-000-ego0": {"driven_any": 1.008631692321679, "get_ui_image": 0.07747621570683559, "step_physics": 0.4975988184932337, "survival_time": 13.80000000000006, "driven_lanedir": 0.996662061061377, "get_state_dump": 0.03848780277403683, "get_robot_state": 0.05509491861942443, "sim_render-ego0": 0.01224734722922425, "sim_render-ego1": 0.011508578427862176, "sim_render-ego2": 0.010813785373949402, "sim_render-ego3": 0.009021975073143031, "get_duckie_state": 3.0288627431711133e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.545648223635081, "agent_compute-ego0": 0.15334198535134216, "agent_compute-ego1": 0.1287445092459448, "agent_compute-ego2": 0.12819374081029788, "agent_compute-ego3": 0.12682458467862237, "complete-iteration": 1.3577861346923057, "set_robot_commands": 0.0099511378938971, "deviation-center-line": 1.660288615147687, "driven_lanedir_consec": 0.996662061061377, "sim_compute_sim_state": 0.04215823011708174, "sim_compute_performance-ego0": 0.007430179024431249, "sim_compute_performance-ego1": 0.006208084120216783, "sim_compute_performance-ego2": 0.006029762515952871, "sim_compute_performance-ego3": 0.005483917380928563}, "LFV_multi-norm-loop-000-ego1": {"driven_any": 3.2709867692247396, "get_ui_image": 0.07747621570683559, "step_physics": 0.4975988184932337, "survival_time": 13.80000000000006, "driven_lanedir": 3.2464012273197382, "get_state_dump": 0.03848780277403683, "get_robot_state": 0.05509491861942443, "sim_render-ego0": 0.01224734722922425, "sim_render-ego1": 0.011508578427862176, "sim_render-ego2": 0.010813785373949402, "sim_render-ego3": 0.009021975073143031, "get_duckie_state": 3.0288627431711133e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.932839420396196, "agent_compute-ego0": 0.15334198535134216, "agent_compute-ego1": 0.1287445092459448, "agent_compute-ego2": 0.12819374081029788, "agent_compute-ego3": 0.12682458467862237, "complete-iteration": 1.3577861346923057, "set_robot_commands": 0.0099511378938971, "deviation-center-line": 0.525396175196958, "driven_lanedir_consec": 3.2464012273197382, "sim_compute_sim_state": 0.04215823011708174, "sim_compute_performance-ego0": 0.007430179024431249, "sim_compute_performance-ego1": 0.006208084120216783, "sim_compute_performance-ego2": 0.006029762515952871, "sim_compute_performance-ego3": 0.005483917380928563}, "LFV_multi-norm-loop-000-ego2": {"driven_any": 1.1193219752650816, "get_ui_image": 0.07747621570683559, "step_physics": 0.4975988184932337, "survival_time": 13.80000000000006, "driven_lanedir": 1.1137738202113483, "get_state_dump": 0.03848780277403683, "get_robot_state": 0.05509491861942443, "sim_render-ego0": 0.01224734722922425, "sim_render-ego1": 0.011508578427862176, "sim_render-ego2": 0.010813785373949402, "sim_render-ego3": 0.009021975073143031, "get_duckie_state": 3.0288627431711133e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.945834407431119, "agent_compute-ego0": 0.15334198535134216, "agent_compute-ego1": 0.1287445092459448, "agent_compute-ego2": 0.12819374081029788, "agent_compute-ego3": 0.12682458467862237, "complete-iteration": 1.3577861346923057, "set_robot_commands": 0.0099511378938971, "deviation-center-line": 0.7126768326024951, "driven_lanedir_consec": 1.1137738202113483, "sim_compute_sim_state": 0.04215823011708174, "sim_compute_performance-ego0": 0.007430179024431249, "sim_compute_performance-ego1": 0.006208084120216783, "sim_compute_performance-ego2": 0.006029762515952871, "sim_compute_performance-ego3": 0.005483917380928563}, "LFV_multi-norm-loop-000-ego3": 
{"driven_any": 6.720676979656855, "get_ui_image": 0.07747621570683559, "step_physics": 0.4975988184932337, "survival_time": 13.80000000000006, "driven_lanedir": 6.648836904221607, "get_state_dump": 0.03848780277403683, "get_robot_state": 0.05509491861942443, "sim_render-ego0": 0.01224734722922425, "sim_render-ego1": 0.011508578427862176, "sim_render-ego2": 0.010813785373949402, "sim_render-ego3": 0.009021975073143031, "get_duckie_state": 3.0288627431711133e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.6814782631023315, "agent_compute-ego0": 0.15334198535134216, "agent_compute-ego1": 0.1287445092459448, "agent_compute-ego2": 0.12819374081029788, "agent_compute-ego3": 0.12682458467862237, "complete-iteration": 1.3577861346923057, "set_robot_commands": 0.0099511378938971, "deviation-center-line": 0.8396215759798198, "driven_lanedir_consec": 6.648836904221607, "sim_compute_sim_state": 0.04215823011708174, "sim_compute_performance-ego0": 0.007430179024431249, "sim_compute_performance-ego1": 0.006208084120216783, "sim_compute_performance-ego2": 0.006029762515952871, "sim_compute_performance-ego3": 0.005483917380928563}, "LFV_multi-norm-zigzag-000-ego0": {"driven_any": 22.4584202936406, "get_ui_image": 0.08468200484540232, "step_physics": 0.9483486447378854, "survival_time": 48.099999999999405, "driven_lanedir": 21.08988072347855, "get_state_dump": 0.03094858989537319, "get_robot_state": 0.05017171148694317, "sim_render-ego0": 0.010154867221757012, "sim_render-ego1": 0.009164111512719904, "sim_render-ego2": 0.009675650829342916, "sim_render-ego3": 0.008930302607059975, "get_duckie_state": 3.1251897445472725e-06, "in-drivable-lane": 2.599999999999987, "deviation-heading": 7.488414182630852, "agent_compute-ego0": 0.127238673708273, "agent_compute-ego1": 0.1089144919148858, "agent_compute-ego2": 0.1066741616555092, "agent_compute-ego3": 0.10511025461452402, "complete-iteration": 1.714555342744568, "set_robot_commands": 0.007177197920818072, "deviation-center-line": 3.5117593562722984, "driven_lanedir_consec": 21.08988072347855, "sim_compute_sim_state": 0.06619628245709222, "sim_compute_performance-ego0": 0.006809965224776065, "sim_compute_performance-ego1": 0.004584217616207008, "sim_compute_performance-ego2": 0.004622751802422671, "sim_compute_performance-ego3": 0.005011652365156671}, "LFV_multi-norm-zigzag-000-ego1": {"driven_any": 11.600709578828434, "get_ui_image": 0.08468200484540232, "step_physics": 0.9483486447378854, "survival_time": 48.099999999999405, "driven_lanedir": 10.92528890466095, "get_state_dump": 0.03094858989537319, "get_robot_state": 0.05017171148694317, "sim_render-ego0": 0.010154867221757012, "sim_render-ego1": 0.009164111512719904, "sim_render-ego2": 0.009675650829342916, "sim_render-ego3": 0.008930302607059975, "get_duckie_state": 3.1251897445472725e-06, "in-drivable-lane": 1.2500000000000044, "deviation-heading": 14.88277701798551, "agent_compute-ego0": 0.127238673708273, "agent_compute-ego1": 0.1089144919148858, "agent_compute-ego2": 0.1066741616555092, "agent_compute-ego3": 0.10511025461452402, "complete-iteration": 1.714555342744568, "set_robot_commands": 0.007177197920818072, "deviation-center-line": 4.685258885331869, "driven_lanedir_consec": 10.92528890466095, "sim_compute_sim_state": 0.06619628245709222, "sim_compute_performance-ego0": 0.006809965224776065, "sim_compute_performance-ego1": 0.004584217616207008, "sim_compute_performance-ego2": 0.004622751802422671, "sim_compute_performance-ego3": 0.005011652365156671}, "LFV_multi-norm-zigzag-000-ego2": 
{"driven_any": 14.882625929480804, "get_ui_image": 0.08468200484540232, "step_physics": 0.9483486447378854, "survival_time": 48.099999999999405, "driven_lanedir": 14.082195873383895, "get_state_dump": 0.03094858989537319, "get_robot_state": 0.05017171148694317, "sim_render-ego0": 0.010154867221757012, "sim_render-ego1": 0.009164111512719904, "sim_render-ego2": 0.009675650829342916, "sim_render-ego3": 0.008930302607059975, "get_duckie_state": 3.1251897445472725e-06, "in-drivable-lane": 1.4500000000000108, "deviation-heading": 6.631339641284122, "agent_compute-ego0": 0.127238673708273, "agent_compute-ego1": 0.1089144919148858, "agent_compute-ego2": 0.1066741616555092, "agent_compute-ego3": 0.10511025461452402, "complete-iteration": 1.714555342744568, "set_robot_commands": 0.007177197920818072, "deviation-center-line": 3.3761229140722087, "driven_lanedir_consec": 14.082195873383895, "sim_compute_sim_state": 0.06619628245709222, "sim_compute_performance-ego0": 0.006809965224776065, "sim_compute_performance-ego1": 0.004584217616207008, "sim_compute_performance-ego2": 0.004622751802422671, "sim_compute_performance-ego3": 0.005011652365156671}, "LFV_multi-norm-zigzag-000-ego3": {"driven_any": 11.288432337203725, "get_ui_image": 0.08468200484540232, "step_physics": 0.9483486447378854, "survival_time": 48.099999999999405, "driven_lanedir": 10.909233457179283, "get_state_dump": 0.03094858989537319, "get_robot_state": 0.05017171148694317, "sim_render-ego0": 0.010154867221757012, "sim_render-ego1": 0.009164111512719904, "sim_render-ego2": 0.009675650829342916, "sim_render-ego3": 0.008930302607059975, "get_duckie_state": 3.1251897445472725e-06, "in-drivable-lane": 0.3500000000000032, "deviation-heading": 10.291517953394273, "agent_compute-ego0": 0.127238673708273, "agent_compute-ego1": 0.1089144919148858, "agent_compute-ego2": 0.1066741616555092, "agent_compute-ego3": 0.10511025461452402, "complete-iteration": 1.714555342744568, "set_robot_commands": 0.007177197920818072, "deviation-center-line": 3.973005565851766, "driven_lanedir_consec": 10.909233457179283, "sim_compute_sim_state": 0.06619628245709222, "sim_compute_performance-ego0": 0.006809965224776065, "sim_compute_performance-ego1": 0.004584217616207008, "sim_compute_performance-ego2": 0.004622751802422671, "sim_compute_performance-ego3": 0.005011652365156671}, "LFV_multi-norm-techtrack-000-ego0": {"driven_any": 9.289474282206614, "get_ui_image": 0.08082133729863773, "step_physics": 0.8119632530585661, "survival_time": 25.500000000000227, "driven_lanedir": 8.76615241756709, "get_state_dump": 0.02938689504350935, "get_robot_state": 0.046671906096128105, "sim_render-ego0": 0.00947160310241341, "sim_render-ego1": 0.008437408626429489, "sim_render-ego2": 0.008671174543944357, "sim_render-ego3": 0.008270985926200732, "get_duckie_state": 3.2557433365375793e-06, "in-drivable-lane": 7.0000000000000995, "deviation-heading": 3.355866733742272, "agent_compute-ego0": 0.12998782072048598, "agent_compute-ego1": 0.1126695533088042, "agent_compute-ego2": 0.1099833387684682, "agent_compute-ego3": 0.11375725385960768, "complete-iteration": 1.5709364087614297, "set_robot_commands": 0.006132957286797391, "deviation-center-line": 1.1572362815813495, "driven_lanedir_consec": 8.76615241756709, "sim_compute_sim_state": 0.05448978287833078, "sim_compute_performance-ego0": 0.006595234115063095, "sim_compute_performance-ego1": 0.004493902807366358, "sim_compute_performance-ego2": 0.00445290507635725, "sim_compute_performance-ego3": 0.00446540642157926}, 
"LFV_multi-norm-techtrack-000-ego1": {"driven_any": 3.634422435690906, "get_ui_image": 0.08082133729863773, "step_physics": 0.8119632530585661, "survival_time": 25.500000000000227, "driven_lanedir": 3.060634333986818, "get_state_dump": 0.02938689504350935, "get_robot_state": 0.046671906096128105, "sim_render-ego0": 0.00947160310241341, "sim_render-ego1": 0.008437408626429489, "sim_render-ego2": 0.008671174543944357, "sim_render-ego3": 0.008270985926200732, "get_duckie_state": 3.2557433365375793e-06, "in-drivable-lane": 1.3000000000000007, "deviation-heading": 9.103409844428269, "agent_compute-ego0": 0.12998782072048598, "agent_compute-ego1": 0.1126695533088042, "agent_compute-ego2": 0.1099833387684682, "agent_compute-ego3": 0.11375725385960768, "complete-iteration": 1.5709364087614297, "set_robot_commands": 0.006132957286797391, "deviation-center-line": 3.5437718854438516, "driven_lanedir_consec": 3.060634333986818, "sim_compute_sim_state": 0.05448978287833078, "sim_compute_performance-ego0": 0.006595234115063095, "sim_compute_performance-ego1": 0.004493902807366358, "sim_compute_performance-ego2": 0.00445290507635725, "sim_compute_performance-ego3": 0.00446540642157926}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 4.131995111470532, "get_ui_image": 0.08082133729863773, "step_physics": 0.8119632530585661, "survival_time": 25.500000000000227, "driven_lanedir": 4.082488962295669, "get_state_dump": 0.02938689504350935, "get_robot_state": 0.046671906096128105, "sim_render-ego0": 0.00947160310241341, "sim_render-ego1": 0.008437408626429489, "sim_render-ego2": 0.008671174543944357, "sim_render-ego3": 0.008270985926200732, "get_duckie_state": 3.2557433365375793e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.200506294096474, "agent_compute-ego0": 0.12998782072048598, "agent_compute-ego1": 0.1126695533088042, "agent_compute-ego2": 0.1099833387684682, "agent_compute-ego3": 0.11375725385960768, "complete-iteration": 1.5709364087614297, "set_robot_commands": 0.006132957286797391, "deviation-center-line": 2.793224769868812, "driven_lanedir_consec": 4.082488962295669, "sim_compute_sim_state": 0.05448978287833078, "sim_compute_performance-ego0": 0.006595234115063095, "sim_compute_performance-ego1": 0.004493902807366358, "sim_compute_performance-ego2": 0.00445290507635725, "sim_compute_performance-ego3": 0.00446540642157926}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 12.166247135608948, "get_ui_image": 0.08082133729863773, "step_physics": 0.8119632530585661, "survival_time": 25.500000000000227, "driven_lanedir": 11.573623696993607, "get_state_dump": 0.02938689504350935, "get_robot_state": 0.046671906096128105, "sim_render-ego0": 0.00947160310241341, "sim_render-ego1": 0.008437408626429489, "sim_render-ego2": 0.008671174543944357, "sim_render-ego3": 0.008270985926200732, "get_duckie_state": 3.2557433365375793e-06, "in-drivable-lane": 1.050000000000015, "deviation-heading": 3.939493375876067, "agent_compute-ego0": 0.12998782072048598, "agent_compute-ego1": 0.1126695533088042, "agent_compute-ego2": 0.1099833387684682, "agent_compute-ego3": 0.11375725385960768, "complete-iteration": 1.5709364087614297, "set_robot_commands": 0.006132957286797391, "deviation-center-line": 1.548662468903218, "driven_lanedir_consec": 11.573623696993607, "sim_compute_sim_state": 0.05448978287833078, "sim_compute_performance-ego0": 0.006595234115063095, "sim_compute_performance-ego1": 0.004493902807366358, "sim_compute_performance-ego2": 0.00445290507635725, "sim_compute_performance-ego3": 
0.00446540642157926}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 3.542051695809606, "get_ui_image": 0.057040678047985766, "step_physics": 0.3574633820456748, "survival_time": 7.99999999999998, "driven_lanedir": 3.485589746288337, "get_state_dump": 0.020449155606098057, "get_robot_state": 0.02428164541351129, "sim_render-ego0": 0.009974059110842875, "sim_render-ego1": 0.008409778523889388, "get_duckie_state": 2.7040517108040566e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.1525089054371125, "agent_compute-ego0": 0.14943902536949016, "agent_compute-ego1": 0.12246457982507553, "complete-iteration": 0.7957370503348593, "set_robot_commands": 0.006444662994479541, "deviation-center-line": 0.5244579909543857, "driven_lanedir_consec": 3.485589746288337, "sim_compute_sim_state": 0.024026445720506752, "sim_compute_performance-ego0": 0.005993511365807574, "sim_compute_performance-ego1": 0.0041172074975434295}, "LFV_multi-norm-small_loop-000-ego1": {"driven_any": 3.222519519891979, "get_ui_image": 0.057040678047985766, "step_physics": 0.3574633820456748, "survival_time": 7.99999999999998, "driven_lanedir": 3.1399072970072126, "get_state_dump": 0.020449155606098057, "get_robot_state": 0.02428164541351129, "sim_render-ego0": 0.009974059110842875, "sim_render-ego1": 0.008409778523889388, "get_duckie_state": 2.7040517108040566e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.6314448400562316, "agent_compute-ego0": 0.14943902536949016, "agent_compute-ego1": 0.12246457982507553, "complete-iteration": 0.7957370503348593, "set_robot_commands": 0.006444662994479541, "deviation-center-line": 0.46580390692958307, "driven_lanedir_consec": 3.1399072970072126, "sim_compute_sim_state": 0.024026445720506752, "sim_compute_performance-ego0": 0.005993511365807574, "sim_compute_performance-ego1": 0.0041172074975434295}}
set_robot_commands_max: 0.0099511378938971
set_robot_commands_mean: 0.007566749885357808
set_robot_commands_median: 0.007177197920818072
set_robot_commands_min: 0.006132957286797391
sim_compute_performance-ego0_max: 0.007430179024431249
sim_compute_performance-ego0_mean: 0.006809181156335483
sim_compute_performance-ego0_median: 0.006809965224776065
sim_compute_performance-ego0_min: 0.005993511365807574
sim_compute_performance-ego1_max: 0.006208084120216783
sim_compute_performance-ego1_mean: 0.004955659512160532
sim_compute_performance-ego1_median: 0.004584217616207008
sim_compute_performance-ego1_min: 0.0041172074975434295
sim_compute_sim_state_max: 0.06619628245709222
sim_compute_sim_state_mean: 0.049959290946502304
sim_compute_sim_state_median: 0.05448978287833078
sim_compute_sim_state_min: 0.024026445720506752
sim_render-ego0_max: 0.01224734722922425
sim_render-ego0_mean: 0.010531670602518887
sim_render-ego0_median: 0.010154867221757012
sim_render-ego0_min: 0.00947160310241341
sim_render-ego1_max: 0.011508578427862176
sim_render-ego1_mean: 0.00951856795113036
sim_render-ego1_median: 0.009164111512719904
sim_render-ego1_min: 0.008409778523889388
simulation-passed: 1
step_physics_max: 0.9483486447378854
step_physics_mean: 0.6961835449464351
step_physics_median: 0.8119632530585661
step_physics_min: 0.3574633820456748
survival_time_max: 48.099999999999405
survival_time_mean: 26.114285714285625
survival_time_min: 7.99999999999998
No reset possible
Job 62330 | submission 13610 | user: Raphael Jean | label: mobile-segmentation-pedestrian | challenge: aido-LF-sim-testing | step: 348 | status: failed | up to date: yes | evaluator: reg02 | duration: 0:03:27
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
Job 62329 | submission 13610 | user: Raphael Jean | label: mobile-segmentation-pedestrian | challenge: aido-LF-sim-testing | step: 348 | status: failed | up to date: yes | evaluator: reg02 | duration: 0:03:38
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
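Both failed jobs above died before any episode ran: the experiment manager could not connect to the ego0 agent container and raised InvalidSubmission from code.py line 190. The connection logic itself is not shown in these logs; purely as an illustration of the general shape of such a guard (connect_to_agent, CONNECT_TIMEOUT, and the use of asyncio.wait_for are assumptions, not the real duckietown_experiment_manager code, which uses its own SignalTimeout mechanism):

import asyncio

CONNECT_TIMEOUT = 1.0  # seconds; short value chosen only for this demo


class InvalidSubmission(Exception):
    """Stand-in for duckietown_challenges.exceptions.InvalidSubmission."""


async def connect_to_agent(robot_name: str) -> None:
    # Simulates an agent container that never finishes starting up.
    await asyncio.sleep(3600)


async def main() -> None:
    try:
        await asyncio.wait_for(connect_to_agent("ego0"), timeout=CONNECT_TIMEOUT)
    except asyncio.TimeoutError:
        msg = "Timeout during connection to ego0"
        raise InvalidSubmission(msg)


if __name__ == "__main__":
    asyncio.run(main(), debug=True)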
Job 62322 | submission 13640 | user: Jean-Sébastien Grondin 🇨🇦 | label: exercise_ros_template | challenge: aido-LF-sim-testing | step: 348 | status: success | up to date: yes | evaluator: reg02 | duration: 1:15:34
driven_lanedir_consec_median: 25.42065715360476
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.030865820526193
in-drivable-lane_median: 2.199999999999901


other stats
agent_compute-ego0_max: 0.2274537245300191
agent_compute-ego0_mean: 0.2238056463166935
agent_compute-ego0_median: 0.22445987672432577
agent_compute-ego0_min: 0.2188491072881033
complete-iteration_max: 0.6510675483500332
complete-iteration_mean: 0.5930479593221393
complete-iteration_median: 0.595431553434075
complete-iteration_min: 0.530261182070374
deviation-center-line_max: 4.389772966644217
deviation-center-line_mean: 3.972758283342981
deviation-center-line_min: 3.4395285256753194
deviation-heading_max: 14.762193489667789
deviation-heading_mean: 11.289516678578218
deviation-heading_median: 10.52002041827193
deviation-heading_min: 9.355832388101229
driven_any_max: 27.544906394931143
driven_any_mean: 27.254343619776115
driven_any_median: 27.26781283735796
driven_any_min: 26.936842409457405
driven_lanedir_consec_max: 26.234796605727627
driven_lanedir_consec_mean: 25.3347827470533
driven_lanedir_consec_min: 24.263020075276057
driven_lanedir_max: 26.234796605727627
driven_lanedir_mean: 25.3347827470533
driven_lanedir_median: 25.42065715360476
driven_lanedir_min: 24.263020075276057
get_duckie_state_max: 3.0162630232049464e-06
get_duckie_state_mean: 2.95864354561608e-06
get_duckie_state_median: 2.955020615500673e-06
get_duckie_state_min: 2.9082699282580273e-06
get_robot_state_max: 0.01260021266889612
get_robot_state_mean: 0.011812998392897582
get_robot_state_median: 0.011556497025152329
get_robot_state_min: 0.01153878685238955
get_state_dump_max: 0.01875158471926166
get_state_dump_mean: 0.017680763999786502
get_state_dump_median: 0.018633766932650273
get_state_dump_min: 0.014703937414583814
get_ui_image_max: 0.06339150562969274
get_ui_image_mean: 0.05470833214593866
get_ui_image_median: 0.05492527022349844
get_ui_image_min: 0.04559128250706503
in-drivable-lane_max: 2.8999999999998813
in-drivable-lane_mean: 2.2249999999999117
in-drivable-lane_min: 1.5999999999999632
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 27.169510682266186, "get_ui_image": 0.05243357373316222, "step_physics": 0.22241846171148016, "survival_time": 59.99999999999873, "driven_lanedir": 25.79274927781033, "get_state_dump": 0.01856255987899488, "get_robot_state": 0.01153878685238955, "sim_render-ego0": 0.010632417084870983, "get_duckie_state": 2.9082699282580273e-06, "in-drivable-lane": 1.7499999999999138, "deviation-heading": 9.355832388101229, "agent_compute-ego0": 0.2274537245300191, "complete-iteration": 0.5812515156354435, "set_robot_commands": 0.007976325525034477, "deviation-center-line": 3.4395285256753194, "driven_lanedir_consec": 25.79274927781033, "sim_compute_sim_state": 0.023387640937976698, "sim_compute_performance-ego0": 0.006654767370740142}, "LF-norm-zigzag-000-ego0": {"driven_any": 26.936842409457405, "get_ui_image": 0.06339150562969274, "step_physics": 0.27728976020209495, "survival_time": 59.99999999999873, "driven_lanedir": 24.263020075276057, "get_state_dump": 0.014703937414583814, "get_robot_state": 0.011540661445763784, "sim_render-ego0": 0.010867418992727822, "get_duckie_state": 3.0162630232049464e-06, "in-drivable-lane": 2.6499999999998884, "deviation-heading": 14.762193489667789, "agent_compute-ego0": 0.22735392580818475, "complete-iteration": 0.6510675483500332, "set_robot_commands": 0.008303713937484651, "deviation-center-line": 4.389772966644217, "driven_lanedir_consec": 24.263020075276057, "sim_compute_sim_state": 0.03094216250658631, "sim_compute_performance-ego0": 0.006477039719898436}, "LF-norm-techtrack-000-ego0": {"driven_any": 27.36611499244974, "get_ui_image": 0.057416966713834665, "step_physics": 0.2471878457128952, "survival_time": 59.99999999999873, "driven_lanedir": 25.04856502939919, "get_state_dump": 0.018704973986305665, "get_robot_state": 0.011572332604540872, "sim_render-ego0": 0.010049033026016323, "get_duckie_state": 2.9116447124751183e-06, "in-drivable-lane": 2.8999999999998813, "deviation-heading": 11.618042535466229, "agent_compute-ego0": 0.2188491072881033, "complete-iteration": 0.6096115912327064, "set_robot_commands": 0.008839709474879637, "deviation-center-line": 3.743252930625733, "driven_lanedir_consec": 25.04856502939919, "sim_compute_sim_state": 0.030057407834944774, "sim_compute_performance-ego0": 0.00673728858700005}, "LF-norm-small_loop-000-ego0": {"driven_any": 27.544906394931143, "get_ui_image": 0.04559128250706503, "step_physics": 0.1875074691915393, "survival_time": 59.99999999999873, "driven_lanedir": 26.234796605727627, "get_state_dump": 0.01875158471926166, "get_robot_state": 0.01260021266889612, "sim_render-ego0": 0.011050880203437646, "get_duckie_state": 2.9983965185262284e-06, "in-drivable-lane": 1.5999999999999632, "deviation-heading": 9.42199830107763, "agent_compute-ego0": 0.22156582764046676, "complete-iteration": 0.530261182070374, "set_robot_commands": 0.00875797041449122, "deviation-center-line": 4.318478710426653, "driven_lanedir_consec": 26.234796605727627, "sim_compute_sim_state": 0.01726508001602262, "sim_compute_performance-ego0": 0.006970647769010037}}
set_robot_commands_max: 0.008839709474879637
set_robot_commands_mean: 0.008469429837972497
set_robot_commands_median: 0.008530842175987936
set_robot_commands_min: 0.007976325525034477
sim_compute_performance-ego0_max: 0.006970647769010037
sim_compute_performance-ego0_mean: 0.006709935861662166
sim_compute_performance-ego0_median: 0.006696027978870096
sim_compute_performance-ego0_min: 0.006477039719898436
sim_compute_sim_state_max: 0.03094216250658631
sim_compute_sim_state_mean: 0.0254130728238826
sim_compute_sim_state_median: 0.026722524386460736
sim_compute_sim_state_min: 0.01726508001602262
sim_render-ego0_max: 0.011050880203437646
sim_render-ego0_mean: 0.010649937326763196
sim_render-ego0_median: 0.010749918038799404
sim_render-ego0_min: 0.010049033026016323
simulation-passed: 1
step_physics_max: 0.27728976020209495
step_physics_mean: 0.2336008842045024
step_physics_median: 0.23480315371218768
step_physics_min: 0.1875074691915393
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 62321 | submission 13647 | user: Jean-Sébastien Grondin 🇨🇦 | label: exercise_ros_template | challenge: aido-LFP-sim-validation | step: 350 | status: success | up to date: yes | evaluator: reg02 | duration: 0:09:02
survival_time_median: 4.874999999999991
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.66437979919727
deviation-center-line_median: 0.2931924266881385


other stats
agent_compute-ego0_max: 0.0488516631191724
agent_compute-ego0_mean: 0.041232997432429225
agent_compute-ego0_median: 0.039670541497743184
agent_compute-ego0_min: 0.03673924361505816
complete-iteration_max: 0.5544315592883384
complete-iteration_mean: 0.4839245576156004
complete-iteration_median: 0.5060362067650741
complete-iteration_min: 0.369194257643915
deviation-center-line_max: 0.6893377922523443
deviation-center-line_mean: 0.3653825516527035
deviation-center-line_min: 0.1858075609821929
deviation-heading_max: 2.72597870522281
deviation-heading_mean: 1.2702278649970735
deviation-heading_median: 0.97078389913569
deviation-heading_min: 0.4133649564941046
driven_any_max: 4.216492410954944
driven_any_mean: 2.0491563007627427
driven_any_median: 1.7162703843341198
driven_any_min: 0.5475920234277866
driven_lanedir_consec_max: 3.5944696527910818
driven_lanedir_consec_mean: 1.865023113074084
driven_lanedir_consec_min: 0.5368632011107151
driven_lanedir_max: 3.5944696527910818
driven_lanedir_mean: 1.865023113074084
driven_lanedir_median: 1.66437979919727
driven_lanedir_min: 0.5368632011107151
get_duckie_state_max: 0.08594009350163276
get_duckie_state_mean: 0.06287323367036214
get_duckie_state_median: 0.0765039940539016
get_duckie_state_min: 0.012544853072012623
get_robot_state_max: 0.014960919340995893
get_robot_state_mean: 0.012081888710926894
get_robot_state_median: 0.01151749743090856
get_robot_state_min: 0.010331640640894571
get_state_dump_max: 0.041794117182901463
get_state_dump_mean: 0.027089689641729944
get_state_dump_median: 0.024563873352858943
get_state_dump_min: 0.017436894678300428
get_ui_image_max: 0.07032173627043424
get_ui_image_mean: 0.05850019206511708
get_ui_image_median: 0.05856698146308532
get_ui_image_min: 0.04654506906386344
in-drivable-lane_max: 0.9500000000000064
in-drivable-lane_mean: 0.2375000000000016
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 4.216492410954944, "get_ui_image": 0.05495449187050403, "step_physics": 0.2525896589520951, "survival_time": 10.600000000000016, "driven_lanedir": 3.5944696527910818, "get_state_dump": 0.025324954673158172, "get_robot_state": 0.011068870204155434, "sim_render-ego0": 0.010351741817635547, "get_duckie_state": 0.08594009350163276, "in-drivable-lane": 0.9500000000000064, "deviation-heading": 2.72597870522281, "agent_compute-ego0": 0.03860599222317548, "complete-iteration": 0.5133142415346674, "set_robot_commands": 0.007559954280584631, "deviation-center-line": 0.6893377922523443, "driven_lanedir_consec": 3.5944696527910818, "sim_compute_sim_state": 0.02066239952481409, "sim_compute_performance-ego0": 0.006044525495717224}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.5475920234277866, "get_ui_image": 0.0621794710556666, "step_physics": 0.24425501128037772, "survival_time": 2.3499999999999996, "driven_lanedir": 0.5368632011107151, "get_state_dump": 0.02380279203255971, "get_robot_state": 0.010331640640894571, "sim_render-ego0": 0.008790045976638794, "get_duckie_state": 0.07526664932568868, "in-drivable-lane": 0.0, "deviation-heading": 0.4133649564941046, "agent_compute-ego0": 0.04073509077231089, "complete-iteration": 0.4987581719954808, "set_robot_commands": 0.005192289749781291, "deviation-center-line": 0.2479913532783351, "driven_lanedir_consec": 0.5368632011107151, "sim_compute_sim_state": 0.022442201773325603, "sim_compute_performance-ego0": 0.005547339717547099}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.171515563374533, "get_ui_image": 0.07032173627043424, "step_physics": 0.24215390257639427, "survival_time": 3.599999999999995, "driven_lanedir": 1.1357577795739768, "get_state_dump": 0.041794117182901463, "get_robot_state": 0.014960919340995893, "sim_render-ego0": 0.007884058233809797, "get_duckie_state": 0.07774133878211452, "in-drivable-lane": 0.0, "deviation-heading": 0.8182272241761924, "agent_compute-ego0": 0.0488516631191724, "complete-iteration": 0.5544315592883384, "set_robot_commands": 0.009155838456872392, "deviation-center-line": 0.1858075609821929, "driven_lanedir_consec": 1.1357577795739768, "sim_compute_sim_state": 0.03498439592857883, "sim_compute_performance-ego0": 0.006365407003115301}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.2610252052937065, "get_ui_image": 0.04654506906386344, "step_physics": 0.20015890559842509, "survival_time": 6.149999999999986, "driven_lanedir": 2.193001818820563, "get_state_dump": 0.017436894678300428, "get_robot_state": 0.011966124657661684, "sim_render-ego0": 0.013740462641562184, "get_duckie_state": 0.012544853072012623, "in-drivable-lane": 0.0, "deviation-heading": 1.1233405740951876, "agent_compute-ego0": 0.03673924361505816, "complete-iteration": 0.369194257643915, "set_robot_commands": 0.007157268062714607, "deviation-center-line": 0.3383935000979419, "driven_lanedir_consec": 2.193001818820563, "sim_compute_sim_state": 0.016260902727803877, "sim_compute_performance-ego0": 0.00643295818759549}}
set_robot_commands_max0.009155838456872392
set_robot_commands_mean0.00726633763748823
set_robot_commands_median0.007358611171649619
set_robot_commands_min0.005192289749781291
sim_compute_performance-ego0_max0.00643295818759549
sim_compute_performance-ego0_mean0.006097557600993779
sim_compute_performance-ego0_median0.006204966249416263
sim_compute_performance-ego0_min0.005547339717547099
sim_compute_sim_state_max0.03498439592857883
sim_compute_sim_state_mean0.0235874749886306
sim_compute_sim_state_median0.021552300649069843
sim_compute_sim_state_min0.016260902727803877
sim_render-ego0_max0.013740462641562184
sim_render-ego0_mean0.01019157716741158
sim_render-ego0_median0.00957089389713717
sim_render-ego0_min0.007884058233809797
simulation-passed1
step_physics_max0.2525896589520951
step_physics_mean0.23478936960182303
step_physics_median0.243204456928386
step_physics_min0.20015890559842509
survival_time_max10.600000000000016
survival_time_mean5.674999999999999
survival_time_min2.3499999999999996
No reset possible
Job 62319 | submission 13617 | Raphael Jean | mobile-segmentation-pedestrian | aido-LFP-sim-validation | step 350 | status: failed | up to date: yes | evaluator: reg02 | duration: 0:04:01
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
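The job above failed inside the experiment manager rather than in scoring: the ego0 agent container never established its communication channel before the deadline, so the run is classified as InvalidSubmission (the submission's fault) rather than a host error. A minimal sketch of the guard pattern, not the actual duckietown_experiment_manager code; connect_to_agent and the 30 s budget are assumptions:

    import asyncio

    class InvalidSubmission(Exception):
        """The submission misbehaved; the run is scored as failed."""

    async def connect_to_agent(name: str) -> None:
        ...  # hypothetical: open the channel to the agent container

    async def require_connection(name: str, timeout_s: float = 30.0) -> None:
        try:
            # wait_for cancels the pending attempt once the budget expires
            await asyncio.wait_for(connect_to_agent(name), timeout=timeout_s)
        except asyncio.TimeoutError:
            # surfaces in the log as e.g. "Timeout during connection to ego0"
            raise InvalidSubmission(f"Timeout during connection to {name}")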
Job 62302 | submission 13691 | Caleb BG 🇺🇸 | baseline-duckietown | aido-LF-sim-validation | step 347 | status: success | up to date: yes | evaluator: reg02 | duration: 0:47:37
driven_lanedir_consec_median0.0
survival_time_median59.99999999999873
deviation-center-line_median1.2422730096440104
in-drivable-lane_median0.0


other stats
agent_compute-ego0_max0.04157347464740128
agent_compute-ego0_mean0.0403371485544184
agent_compute-ego0_median0.04089011746977489
agent_compute-ego0_min0.03799488463072256
complete-iteration_max0.2997927548585585
complete-iteration_mean0.2852930716432004
complete-iteration_median0.28552629250868666
complete-iteration_min0.27032694669686985
deviation-center-line_max4.053503393024394
deviation-center-line_mean1.731069028434027
deviation-center-line_min0.386226701423694
deviation-heading_max27.859809596736422
deviation-heading_mean14.925250437835436
deviation-heading_median14.309269950178932
deviation-heading_min3.22265225424745
driven_any_max2.6645352591003757e-13
driven_any_mean1.9984014443252818e-13
driven_any_median2.6645352591003757e-13
driven_any_min0.0
driven_lanedir_consec_max0.000286102294921875
driven_lanedir_consec_mean7.152557373046875e-05
driven_lanedir_consec_min0.0
driven_lanedir_max0.000286102294921875
driven_lanedir_mean7.152557373046875e-05
driven_lanedir_median0.0
driven_lanedir_min0.0
get_duckie_state_max3.1736867810963195e-06
get_duckie_state_mean3.0815750236415944e-06
get_duckie_state_median3.157507668526147e-06
get_duckie_state_min2.837597976417764e-06
get_robot_state_max0.014137498345799889
get_robot_state_mean0.01334481106312646
get_robot_state_median0.013314274328138111
get_robot_state_min0.012613197250429736
get_state_dump_max0.02289710592766983
get_state_dump_mean0.01971719564744376
get_state_dump_median0.01923110095984136
get_state_dump_min0.017509474742422492
get_ui_image_max0.06284125837854898
get_ui_image_mean0.05557011098885515
get_ui_image_median0.05493704663228234
get_ui_image_min0.049565092312306984
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 2.6645352591003757e-13, "get_ui_image": 0.04980510120884167, "step_physics": 0.10121412201785326, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.01810237132540154, "get_robot_state": 0.013223467619591808, "sim_render-ego0": 0.0110370247290593, "get_duckie_state": 3.1736867810963195e-06, "in-drivable-lane": 0.0, "deviation-heading": 22.66279310353771, "agent_compute-ego0": 0.03799488463072256, "complete-iteration": 0.27032694669686985, "set_robot_commands": 0.008345062190745892, "deviation-center-line": 4.053503393024394, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.023663578581353407, "sim_compute_performance-ego0": 0.0067408059062211345}, "LF-norm-zigzag-000-ego0": {"driven_any": 0.0, "get_ui_image": 0.06284125837854898, "step_physics": 0.10971719617152788, "survival_time": 59.99999999999873, "driven_lanedir": 0.000286102294921875, "get_state_dump": 0.017509474742422492, "get_robot_state": 0.012613197250429736, "sim_render-ego0": 0.011289922919102652, "get_duckie_state": 3.1675327628180943e-06, "in-drivable-lane": 0.0, "deviation-heading": 27.859809596736422, "agent_compute-ego0": 0.041223901396091535, "complete-iteration": 0.2997927548585585, "set_robot_commands": 0.008364552760699905, "deviation-center-line": 1.0457540566888746, "driven_lanedir_consec": 0.000286102294921875, "sim_compute_sim_state": 0.02927589436355578, "sim_compute_performance-ego0": 0.006760102525341025}, "LF-norm-techtrack-000-ego0": {"driven_any": 2.6645352591003757e-13, "get_ui_image": 0.06006899205572301, "step_physics": 0.11124242076667322, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.020359830594281175, "get_robot_state": 0.013405081036684415, "sim_render-ego0": 0.010465397227316674, "get_duckie_state": 2.837597976417764e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.22265225424745, "agent_compute-ego0": 0.04157347464740128, "complete-iteration": 0.297173440903053, "set_robot_commands": 0.007973548474657248, "deviation-center-line": 1.4387919625991463, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.02539472476727361, "sim_compute_performance-ego0": 0.006502065531518636}, "LF-norm-small_loop-000-ego0": {"driven_any": 2.6645352591003757e-13, "get_ui_image": 0.049565092312306984, "step_physics": 0.10273182818931306, "survival_time": 59.99999999999873, "driven_lanedir": 0.0, "get_state_dump": 0.02289710592766983, "get_robot_state": 0.014137498345799889, "sim_render-ego0": 0.011053168704170273, "get_duckie_state": 3.1474825742341996e-06, "in-drivable-lane": 0.0, "deviation-heading": 5.955746796820156, "agent_compute-ego0": 0.04055633354345825, "complete-iteration": 0.2738791441143204, "set_robot_commands": 0.008803255651316774, "deviation-center-line": 0.386226701423694, "driven_lanedir_consec": 0.0, "sim_compute_sim_state": 0.017139079271010018, "sim_compute_performance-ego0": 0.006802670465321664}}
set_robot_commands_max0.008803255651316774
set_robot_commands_mean0.008371604769354954
set_robot_commands_median0.008354807475722897
set_robot_commands_min0.007973548474657248
sim_compute_performance-ego0_max0.006802670465321664
sim_compute_performance-ego0_mean0.006701411107100615
sim_compute_performance-ego0_median0.00675045421578108
sim_compute_performance-ego0_min0.006502065531518636
sim_compute_sim_state_max0.02927589436355578
sim_compute_sim_state_mean0.0238683192457982
sim_compute_sim_state_median0.024529151674313507
sim_compute_sim_state_min0.017139079271010018
sim_render-ego0_max0.011289922919102652
sim_render-ego0_mean0.010961378394912224
sim_render-ego0_median0.011045096716614786
sim_render-ego0_min0.010465397227316674
simulation-passed1
step_physics_max0.11124242076667322
step_physics_mean0.10622639178634186
step_physics_median0.10622451218042048
step_physics_min0.10121412201785326
survival_time_max59.99999999999873
survival_time_mean59.99999999999873
survival_time_min59.99999999999873
No reset possible
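A side note on the recurring survival_time of 59.99999999999873: episodes cap at a nominal 60 s, and the value sits about 1.3e-12 below 60 because simulation time is accumulated in small fixed steps in double precision. The drift is easy to reproduce; the 0.05 s step below is an assumption (the report's times are all multiples of 0.05), and only the order of magnitude matters:

    import math

    dt, n = 0.05, 1200            # 1200 nominal 0.05 s steps = 60 s
    t = 0.0
    for _ in range(n):
        t += dt                   # each addition rounds; the error compounds
    print(t)                      # a hair away from 60.0, the same effect as in the table
    print(math.fsum([dt] * n))    # compensated summation: exactly 60.0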
Job 62299 | submission 13694 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | step 347 | status: success | up to date: yes | evaluator: reg02 | duration: 0:10:25
driven_lanedir_consec_median0.399735831262527
survival_time_median5.699999999999988
deviation-center-line_median0.1326287550235055
in-drivable-lane_median4.04999999999999


other stats
agent_compute-ego0_max0.4070396488363093
agent_compute-ego0_mean0.12311127220208828
agent_compute-ego0_median0.029169455587447105
agent_compute-ego0_min0.027066528797149655
complete-iteration_max0.8291410272771662
complete-iteration_mean0.4774626049816192
complete-iteration_median0.3884982683983717
complete-iteration_min0.30371285585256724
deviation-center-line_max0.1607014721617163
deviation-center-line_mean0.11802894242427787
deviation-center-line_min0.04615678748838428
deviation-heading_max1.570262276455506
deviation-heading_mean0.9596984587337092
deviation-heading_median0.9758940962111282
deviation-heading_min0.31674336605707476
driven_any_max3.851436114382767
driven_any_mean2.3012055677881875
driven_any_median2.1300739843964096
driven_any_min1.0932381879771642
driven_lanedir_consec_max0.48340597962598864
driven_lanedir_consec_mean0.35618583073384297
driven_lanedir_consec_min0.14186568078432926
driven_lanedir_max0.48340597962598864
driven_lanedir_mean0.35618583073384297
driven_lanedir_median0.399735831262527
driven_lanedir_min0.14186568078432926
get_duckie_state_max4.078041423450817e-06
get_duckie_state_mean4.011392593383789e-06
get_duckie_state_median4.041194915771485e-06
get_duckie_state_min3.885139118541371e-06
get_robot_state_max0.014717634157700972
get_robot_state_mean0.011676803397965596
get_robot_state_median0.011066542221949652
get_robot_state_min0.009856494990262116
get_state_dump_max0.021836330673911355
get_state_dump_mean0.017460858696824188
get_state_dump_median0.01743816274863023
get_state_dump_min0.013130778616124932
get_ui_image_max0.06395623467185281
get_ui_image_mean0.05567961981038114
get_ui_image_median0.05625819634307515
get_ui_image_min0.04624585188352145
in-drivable-lane_max10.10000000000002
in-drivable-lane_mean5.049999999999999
in-drivable-lane_min1.9999999999999971
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 1.8661719467483455, "get_ui_image": 0.0511736249923706, "step_physics": 0.16717329502105713, "survival_time": 4.94999999999999, "driven_lanedir": 0.3990879078270344, "get_state_dump": 0.01866492033004761, "get_robot_state": 0.011564040184020996, "sim_render-ego0": 0.012756738662719726, "get_duckie_state": 4.00543212890625e-06, "in-drivable-lane": 3.3999999999999932, "deviation-heading": 0.932010914136802, "agent_compute-ego0": 0.027066528797149655, "complete-iteration": 0.32418944358825685, "set_robot_commands": 0.006422379016876221, "deviation-center-line": 0.1607014721617163, "driven_lanedir_consec": 0.3990879078270344, "sim_compute_sim_state": 0.02242551565170288, "sim_compute_performance-ego0": 0.006738615036010742}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.0932381879771642, "get_ui_image": 0.061342767693779686, "step_physics": 0.29197645458308136, "survival_time": 4.3499999999999925, "driven_lanedir": 0.48340597962598864, "get_state_dump": 0.013130778616124932, "get_robot_state": 0.009856494990262116, "sim_render-ego0": 0.008198123086582531, "get_duckie_state": 3.885139118541371e-06, "in-drivable-lane": 1.9999999999999971, "deviation-heading": 1.570262276455506, "agent_compute-ego0": 0.029817437583749943, "complete-iteration": 0.4528070932084864, "set_robot_commands": 0.00746132027019154, "deviation-center-line": 0.14628707747786857, "driven_lanedir_consec": 0.48340597962598864, "sim_compute_sim_state": 0.02471564303744923, "sim_compute_performance-ego0": 0.006101651625199752}, "LF-norm-techtrack-000-ego0": {"driven_any": 3.851436114382767, "get_ui_image": 0.06395623467185281, "step_physics": 0.2683737592263655, "survival_time": 10.95000000000002, "driven_lanedir": 0.14186568078432926, "get_state_dump": 0.021836330673911355, "get_robot_state": 0.014717634157700972, "sim_render-ego0": 0.012746437029405071, "get_duckie_state": 4.078041423450817e-06, "in-drivable-lane": 10.10000000000002, "deviation-heading": 0.31674336605707476, "agent_compute-ego0": 0.4070396488363093, "complete-iteration": 0.8291410272771662, "set_robot_commands": 0.007018132643266158, "deviation-center-line": 0.04615678748838428, "driven_lanedir_consec": 0.14186568078432926, "sim_compute_sim_state": 0.026689469814300537, "sim_compute_performance-ego0": 0.0065488100051879885}, "LF-norm-small_loop-000-ego0": {"driven_any": 2.3939760220444732, "get_ui_image": 0.04624585188352145, "step_physics": 0.16439942580003006, "survival_time": 6.449999999999985, "driven_lanedir": 0.4003837546980196, "get_state_dump": 0.016211405167212853, "get_robot_state": 0.010569044259878306, "sim_render-ego0": 0.009111925271841195, "get_duckie_state": 4.076957702636719e-06, "in-drivable-lane": 4.699999999999988, "deviation-heading": 1.0197772782854544, "agent_compute-ego0": 0.028521473591144268, "complete-iteration": 0.30371285585256724, "set_robot_commands": 0.007876416353078989, "deviation-center-line": 0.1189704325691424, "driven_lanedir_consec": 0.4003837546980196, "sim_compute_sim_state": 0.014449530381422776, "sim_compute_performance-ego0": 0.006121327326847957}}
set_robot_commands_max0.007876416353078989
set_robot_commands_mean0.007194562070853227
set_robot_commands_median0.007239726456728849
set_robot_commands_min0.006422379016876221
sim_compute_performance-ego0_max0.006738615036010742
sim_compute_performance-ego0_mean0.00637760099831161
sim_compute_performance-ego0_median0.006335068666017973
sim_compute_performance-ego0_min0.006101651625199752
sim_compute_sim_state_max0.026689469814300537
sim_compute_sim_state_mean0.022070039721218855
sim_compute_sim_state_median0.023570579344576056
sim_compute_sim_state_min0.014449530381422776
sim_render-ego0_max0.012756738662719726
sim_render-ego0_mean0.010703306012637132
sim_render-ego0_median0.010929181150623137
sim_render-ego0_min0.008198123086582531
simulation-passed1
step_physics_max0.29197645458308136
step_physics_mean0.22298073365763357
step_physics_median0.21777352712371137
step_physics_min0.16439942580003006
survival_time_max10.95000000000002
survival_time_mean6.674999999999997
survival_time_min4.3499999999999925
No reset possible
Job 62298 | submission 13696 | Samuel Alexander | template-tensorflow | aido-LF-sim-validation | step 347 | status: success | up to date: yes | evaluator: reg02 | duration: 0:08:09
driven_lanedir_consec_median0.362616513347588
survival_time_median4.974999999999991
deviation-center-line_median0.12969313506655755
in-drivable-lane_median3.299999999999993


other stats
agent_compute-ego0_max0.8574547408378288
agent_compute-ego0_mean0.23754021153624752
agent_compute-ego0_median0.03139166737229415
agent_compute-ego0_min0.029922770562573017
complete-iteration_max1.191720283194764
complete-iteration_mean0.5724377809270342
complete-iteration_median0.3905587814842495
complete-iteration_min0.3169132775448738
deviation-center-line_max0.15327191211280158
deviation-center-line_mean0.11446879369200195
deviation-center-line_min0.04521699252209115
deviation-heading_max1.4057096617083424
deviation-heading_mean0.9034562215616172
deviation-heading_median0.9527757282234824
deviation-heading_min0.302563768091162
driven_any_max2.5127280143679083
driven_any_mean1.696515059421674
driven_any_median1.6050395998681983
driven_any_min1.0632530235823918
driven_lanedir_consec_max0.42756338037792774
driven_lanedir_consec_mean0.32519473648403996
driven_lanedir_consec_min0.147982538863056
driven_lanedir_max0.42756338037792774
driven_lanedir_mean0.32519473648403996
driven_lanedir_median0.362616513347588
driven_lanedir_min0.147982538863056
get_duckie_state_max4.010779835353388e-06
get_duckie_state_mean3.813152011769342e-06
get_duckie_state_median3.837673774499619e-06
get_duckie_state_min3.566480662724743e-06
get_robot_state_max0.01136388511301201
get_robot_state_mean0.010185689018083232
get_robot_state_median0.009987478168944693
get_robot_state_min0.009403914621431534
get_state_dump_max0.02421845933970283
get_state_dump_mean0.020225434254840195
get_state_dump_median0.020721163776243723
get_state_dump_min0.01524095012717051
get_ui_image_max0.06609065510402216
get_ui_image_mean0.05582826137503257
get_ui_image_median0.052474751228002846
get_ui_image_min0.05227288794010244
in-drivable-lane_max4.949999999999987
in-drivable-lane_mean3.574999999999992
in-drivable-lane_min2.749999999999995
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 1.7212581220165153, "get_ui_image": 0.05227288794010244, "step_physics": 0.16181825069670985, "survival_time": 4.6499999999999915, "driven_lanedir": 0.38662562014949065, "get_state_dump": 0.018731796995122382, "get_robot_state": 0.009417779902194409, "sim_render-ego0": 0.010752518126305111, "get_duckie_state": 3.80455179417387e-06, "in-drivable-lane": 3.149999999999994, "deviation-heading": 0.8885233490686405, "agent_compute-ego0": 0.030854925196221537, "complete-iteration": 0.3169132775448738, "set_robot_commands": 0.006615585469185037, "deviation-center-line": 0.15327191211280158, "driven_lanedir_consec": 0.38662562014949065, "sim_compute_sim_state": 0.02076632672167839, "sim_compute_performance-ego0": 0.005496788532175916}, "LF-norm-zigzag-000-ego0": {"driven_any": 1.4888210777198814, "get_ui_image": 0.06609065510402216, "step_physics": 0.276834594869168, "survival_time": 5.299999999999989, "driven_lanedir": 0.33860740654568544, "get_state_dump": 0.02271053055736506, "get_robot_state": 0.01136388511301201, "sim_render-ego0": 0.01031698467575501, "get_duckie_state": 4.010779835353388e-06, "in-drivable-lane": 3.449999999999992, "deviation-heading": 1.4057096617083424, "agent_compute-ego0": 0.029922770562573017, "complete-iteration": 0.456195811244929, "set_robot_commands": 0.0067776630972033345, "deviation-center-line": 0.1375699127153115, "driven_lanedir_consec": 0.33860740654568544, "sim_compute_sim_state": 0.024953216035789417, "sim_compute_performance-ego0": 0.007030560591510523}, "LF-norm-techtrack-000-ego0": {"driven_any": 1.0632530235823918, "get_ui_image": 0.05260851285228991, "step_physics": 0.21579111765508785, "survival_time": 3.599999999999995, "driven_lanedir": 0.147982538863056, "get_state_dump": 0.01524095012717051, "get_robot_state": 0.009403914621431534, "sim_render-ego0": 0.009332898544938595, "get_duckie_state": 3.566480662724743e-06, "in-drivable-lane": 2.749999999999995, "deviation-heading": 0.302563768091162, "agent_compute-ego0": 0.8574547408378288, "complete-iteration": 1.191720283194764, "set_robot_commands": 0.005638700641997873, "deviation-center-line": 0.04521699252209115, "driven_lanedir_consec": 0.147982538863056, "sim_compute_sim_state": 0.020575977351567517, "sim_compute_performance-ego0": 0.005476624998327804}, "LF-norm-small_loop-000-ego0": {"driven_any": 2.5127280143679083, "get_ui_image": 0.05234098960371578, "step_physics": 0.16576771350467906, "survival_time": 6.749999999999984, "driven_lanedir": 0.42756338037792774, "get_state_dump": 0.02421845933970283, "get_robot_state": 0.010557176435694976, "sim_render-ego0": 0.010224989231894998, "get_duckie_state": 3.8707957548253674e-06, "in-drivable-lane": 4.949999999999987, "deviation-heading": 1.0170281073783245, "agent_compute-ego0": 0.031928409548366773, "complete-iteration": 0.32492175172356996, "set_robot_commands": 0.006413673653322107, "deviation-center-line": 0.1218163574178036, "driven_lanedir_consec": 0.42756338037792774, "sim_compute_sim_state": 0.01785963773727417, "sim_compute_performance-ego0": 0.0054282090243171245}}
set_robot_commands_max0.0067776630972033345
set_robot_commands_mean0.006361405715427089
set_robot_commands_median0.006514629561253572
set_robot_commands_min0.005638700641997873
sim_compute_performance-ego0_max0.007030560591510523
sim_compute_performance-ego0_mean0.005858045786582842
sim_compute_performance-ego0_median0.00548670676525186
sim_compute_performance-ego0_min0.0054282090243171245
sim_compute_sim_state_max0.024953216035789417
sim_compute_sim_state_mean0.021038789461577372
sim_compute_sim_state_median0.020671152036622953
sim_compute_sim_state_min0.01785963773727417
sim_render-ego0_max0.010752518126305111
sim_render-ego0_mean0.01015684764472343
sim_render-ego0_median0.010270986953825004
sim_render-ego0_min0.009332898544938595
simulation-passed1
step_physics_max0.276834594869168
step_physics_mean0.20505291918141116
step_physics_median0.19077941557988343
step_physics_min0.16181825069670985
survival_time_max6.749999999999984
survival_time_mean5.07499999999999
survival_time_min3.599999999999995
No reset possible
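In both template-tensorflow runs the agent_compute-ego0 maximum (0.41 s and 0.86 s) sits an order of magnitude above the median (~0.03 s), the usual signature of one-off warm-up (graph construction, first-call allocations) landing in a timed iteration. A sketch of keeping warm-up out of steady-state timing; agent.compute is a hypothetical stand-in for the timed call:

    import time
    from statistics import median

    def time_compute(agent, obs, n_warmup=5, n_timed=200):
        # Untimed warm-up iterations absorb one-off initialization costs.
        for _ in range(n_warmup):
            agent.compute(obs)
        samples = []
        for _ in range(n_timed):
            t0 = time.perf_counter()
            agent.compute(obs)
            samples.append(time.perf_counter() - t0)
        return min(samples), median(samples), max(samples)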
Job 62295 | submission 13714 | Frank (Chude) Qian 🇨🇦 | CBC Net v1 | aido-LFP_full-sim-validation | step 352 | status: success | up to date: yes | evaluator: reg02 | duration: 0:11:25
survival_time_median8.774999999999991
in-drivable-lane_median0.1999999999999993
driven_lanedir_consec_median2.825400245305862
deviation-center-line_median0.5774585366662469


other stats
agent_compute-ego0_max0.10348106593620486
agent_compute-ego0_mean0.0987295627018609
agent_compute-ego0_median0.09882939590003512
agent_compute-ego0_min0.09377839307116854
complete-iteration_max0.5795385809058687
complete-iteration_mean0.5043370811872153
complete-iteration_median0.5225426581975179
complete-iteration_min0.3927244274479568
deviation-center-line_max0.8348774251354573
deviation-center-line_mean0.5330636690236606
deviation-center-line_min0.14246017762669144
deviation-heading_max3.135654468709013
deviation-heading_mean1.9348831674892233
deviation-heading_median2.150519199210233
deviation-heading_min0.30283980282741496
driven_any_max4.340091312692358
driven_any_mean2.800310061195733
driven_any_median3.1703641673475187
driven_any_min0.5204205973955357
driven_lanedir_consec_max4.003504886630219
driven_lanedir_consec_mean2.5411955013358556
driven_lanedir_consec_min0.5104766281014785
driven_lanedir_max4.003504886630219
driven_lanedir_mean2.5411955013358556
driven_lanedir_median2.825400245305862
driven_lanedir_min0.5104766281014785
get_duckie_state_max0.07544500653336687
get_duckie_state_mean0.05759446976864549
get_duckie_state_median0.07022456152718845
get_duckie_state_min0.014483749486838176
get_robot_state_max0.0146431515856487
get_robot_state_mean0.013359385971032372
get_robot_state_median0.013502494354916223
get_robot_state_min0.011789403588648335
get_state_dump_max0.039120993963102015
get_state_dump_mean0.02708196556384765
get_state_dump_median0.02599066244372411
get_state_dump_min0.01722554340484036
get_ui_image_max0.0629778221783885
get_ui_image_mean0.05643318920008112
get_ui_image_median0.05886552190234734
get_ui_image_min0.045023890817241306
in-drivable-lane_max1.3000000000000025
in-drivable-lane_mean0.42500000000000027
in-drivable-lane_min0.0
per-episodes
details{"LFP-full-loop-000-ego0": {"driven_any": 0.5204205973955357, "get_ui_image": 0.0570657136963635, "step_physics": 0.13611192819548815, "survival_time": 2.000000000000001, "driven_lanedir": 0.5104766281014785, "get_state_dump": 0.039120993963102015, "get_robot_state": 0.0146431515856487, "sim_render-ego0": 0.00940521751962057, "get_duckie_state": 0.07544500653336687, "in-drivable-lane": 0.0, "deviation-heading": 0.30283980282741496, "agent_compute-ego0": 0.10348106593620486, "complete-iteration": 0.47540888553712424, "set_robot_commands": 0.007976212152620642, "deviation-center-line": 0.14246017762669144, "driven_lanedir_consec": 0.5104766281014785, "sim_compute_sim_state": 0.022944962106099944, "sim_compute_performance-ego0": 0.008977256170133265}, "LFP-full-zigzag-000-ego0": {"driven_any": 4.340091312692358, "get_ui_image": 0.0629778221783885, "step_physics": 0.26277361163105145, "survival_time": 12.500000000000044, "driven_lanedir": 4.003504886630219, "get_state_dump": 0.02269103802532789, "get_robot_state": 0.011789403588648335, "sim_render-ego0": 0.01116732297190632, "get_duckie_state": 0.06631752599283043, "in-drivable-lane": 0.3999999999999986, "deviation-heading": 3.135654468709013, "agent_compute-ego0": 0.0959112558706823, "complete-iteration": 0.5795385809058687, "set_robot_commands": 0.008029292779139788, "deviation-center-line": 0.8348774251354573, "driven_lanedir_consec": 4.003504886630219, "sim_compute_sim_state": 0.03134538547926215, "sim_compute_performance-ego0": 0.006298072784545412}, "LFP-full-techtrack-000-ego0": {"driven_any": 3.5490382518365102, "get_ui_image": 0.060665330108331174, "step_physics": 0.2321964173900838, "survival_time": 9.750000000000004, "driven_lanedir": 2.9522887890193164, "get_state_dump": 0.029290286862120336, "get_robot_state": 0.013143479824066162, "sim_render-ego0": 0.010413644265155404, "get_duckie_state": 0.07413159706154648, "in-drivable-lane": 1.3000000000000025, "deviation-heading": 2.503361387250735, "agent_compute-ego0": 0.10174753592938791, "complete-iteration": 0.5696764308579114, "set_robot_commands": 0.007436805841874103, "deviation-center-line": 0.5039451131855264, "driven_lanedir_consec": 2.9522887890193164, "sim_compute_sim_state": 0.032723426818847656, "sim_compute_performance-ego0": 0.00768236238129285}, "LFP-full-small_loop-000-ego0": {"driven_any": 2.791690082858527, "get_ui_image": 0.045023890817241306, "step_physics": 0.16623987058165726, "survival_time": 7.79999999999998, "driven_lanedir": 2.698511701592407, "get_state_dump": 0.01722554340484036, "get_robot_state": 0.01386150888576629, "sim_render-ego0": 0.010792167323410132, "get_duckie_state": 0.014483749486838176, "in-drivable-lane": 0.0, "deviation-heading": 1.79767701116973, "agent_compute-ego0": 0.09377839307116854, "complete-iteration": 0.3927244274479568, "set_robot_commands": 0.008440363938641397, "deviation-center-line": 0.6509719601469675, "driven_lanedir_consec": 2.698511701592407, "sim_compute_sim_state": 0.016323335611136854, "sim_compute_performance-ego0": 0.00632884547968579}}
set_robot_commands_max0.008440363938641397
set_robot_commands_mean0.007970668678068983
set_robot_commands_median0.008002752465880215
set_robot_commands_min0.007436805841874103
sim_compute_performance-ego0_max0.008977256170133265
sim_compute_performance-ego0_mean0.0073216342039143295
sim_compute_performance-ego0_median0.00700560393048932
sim_compute_performance-ego0_min0.006298072784545412
sim_compute_sim_state_max0.032723426818847656
sim_compute_sim_state_mean0.025834277503836652
sim_compute_sim_state_median0.027145173792681047
sim_compute_sim_state_min0.016323335611136854
sim_render-ego0_max0.01116732297190632
sim_render-ego0_mean0.010444588020023106
sim_render-ego0_median0.010602905794282767
sim_render-ego0_min0.00940521751962057
simulation-passed1
step_physics_max0.26277361163105145
step_physics_mean0.19933045694957016
step_physics_median0.1992181439858705
step_physics_min0.13611192819548815
survival_time_max12.500000000000044
survival_time_mean8.012500000000006
survival_time_min2.000000000000001
No reset possible
Job 62290 | submission 13719 | Frank (Chude) Qian 🇨🇦 | CBC Net v1 | aido-LF_full-sim-validation | step 349 | status: success | up to date: yes | evaluator: reg02 | duration: 0:14:49
survival_time_median4.024999999999993
in-drivable-lane_median2.1749999999999927
driven_lanedir_consec_median0.7879375916748629
deviation-center-line_median0.17229705763589412


other stats
agent_compute-ego0_max0.10827696323394775
agent_compute-ego0_mean0.09907582793792669
agent_compute-ego0_median0.09794876236626908
agent_compute-ego0_min0.09212882378522089
complete-iteration_max0.4731710507319524
complete-iteration_mean0.4254564987668311
complete-iteration_median0.4332775186789434
complete-iteration_min0.3620999069774852
deviation-center-line_max2.241378044644111
deviation-center-line_mean0.6770129712603397
deviation-center-line_min0.12207972512545902
deviation-heading_max9.88629981298562
deviation-heading_mean2.902386442546943
deviation-heading_median0.7123049320907395
deviation-heading_min0.29863609302067395
driven_any_max17.024626820939837
driven_any_mean5.172321739100612
driven_any_median1.3073280278517272
driven_any_min1.0500040797591588
driven_lanedir_consec_max10.738907315219562
driven_lanedir_consec_mean3.230857353878121
driven_lanedir_consec_min0.6086469169431961
driven_lanedir_max10.738907315219562
driven_lanedir_mean3.230857353878121
driven_lanedir_median0.7879375916748629
driven_lanedir_min0.6086469169431961
get_duckie_state_max4.747334648581112e-06
get_duckie_state_mean4.578952101278567e-06
get_duckie_state_median4.545241888626904e-06
get_duckie_state_min4.477989979279347e-06
get_robot_state_max0.014442358261499647
get_robot_state_mean0.012560278262109922
get_robot_state_median0.012191898870215628
get_robot_state_min0.01141495704650879
get_state_dump_max0.03142377352103209
get_state_dump_mean0.021394682290147692
get_state_dump_median0.019683009736678175
get_state_dump_min0.01478893616620232
get_ui_image_max0.06977898340958816
get_ui_image_mean0.05550555526046111
get_ui_image_median0.055051799379774335
get_ui_image_min0.042139638872707594
in-drivable-lane_max13.399999999999968
in-drivable-lane_mean4.587499999999988
in-drivable-lane_min0.6000000000000005
per-episodes
details{"LF-full-loop-000-ego0": {"driven_any": 17.024626820939837, "get_ui_image": 0.04990483171458372, "step_physics": 0.21928692560126312, "survival_time": 41.049999999999805, "driven_lanedir": 10.738907315219562, "get_state_dump": 0.015113472938537598, "get_robot_state": 0.012077240758278656, "sim_render-ego0": 0.010251507851909257, "get_duckie_state": 4.496017511743699e-06, "in-drivable-lane": 13.399999999999968, "deviation-heading": 9.88629981298562, "agent_compute-ego0": 0.0938885368569924, "complete-iteration": 0.4384466537303878, "set_robot_commands": 0.00809677325896103, "deviation-center-line": 2.241378044644111, "driven_lanedir_consec": 10.738907315219562, "sim_compute_sim_state": 0.02321234932781136, "sim_compute_performance-ego0": 0.006400990080079313}, "LF-full-zigzag-000-ego0": {"driven_any": 1.271625413240163, "get_ui_image": 0.06977898340958816, "step_physics": 0.19659143533462137, "survival_time": 3.8499999999999943, "driven_lanedir": 0.6086469169431961, "get_state_dump": 0.03142377352103209, "get_robot_state": 0.014442358261499647, "sim_render-ego0": 0.010668433629549466, "get_duckie_state": 4.477989979279347e-06, "in-drivable-lane": 2.1499999999999932, "deviation-heading": 0.5902105130620672, "agent_compute-ego0": 0.10827696323394775, "complete-iteration": 0.4731710507319524, "set_robot_commands": 0.010204843985728728, "deviation-center-line": 0.13545176697895162, "driven_lanedir_consec": 0.6086469169431961, "sim_compute_sim_state": 0.02627607186635335, "sim_compute_performance-ego0": 0.005288891303233611}, "LF-full-techtrack-000-ego0": {"driven_any": 1.3430306424632912, "get_ui_image": 0.06019876704496496, "step_physics": 0.17432292769936955, "survival_time": 4.199999999999993, "driven_lanedir": 0.8253719119336291, "get_state_dump": 0.02425254653481876, "get_robot_state": 0.01141495704650879, "sim_render-ego0": 0.01258417578304515, "get_duckie_state": 4.59446626551011e-06, "in-drivable-lane": 2.199999999999992, "deviation-heading": 0.29863609302067395, "agent_compute-ego0": 0.10200898787554571, "complete-iteration": 0.42810838362749887, "set_robot_commands": 0.007692937289967256, "deviation-center-line": 0.12207972512545902, "driven_lanedir_consec": 0.8253719119336291, "sim_compute_sim_state": 0.02786695536445169, "sim_compute_performance-ego0": 0.00753470028147978}, "LF-full-small_loop-000-ego0": {"driven_any": 1.0500040797591588, "get_ui_image": 0.042139638872707594, "step_physics": 0.1608806673218222, "survival_time": 3.349999999999996, "driven_lanedir": 0.7505032714160966, "get_state_dump": 0.01478893616620232, "get_robot_state": 0.012306556982152602, "sim_render-ego0": 0.010185395970064052, "get_duckie_state": 4.747334648581112e-06, "in-drivable-lane": 0.6000000000000005, "deviation-heading": 0.834399351119412, "agent_compute-ego0": 0.09212882378522089, "complete-iteration": 0.3620999069774852, "set_robot_commands": 0.00763858065885656, "deviation-center-line": 0.20914234829283665, "driven_lanedir_consec": 0.7505032714160966, "sim_compute_sim_state": 0.01485281130846809, "sim_compute_performance-ego0": 0.006971850114710191}}
set_robot_commands_max0.010204843985728728
set_robot_commands_mean0.008408283798378394
set_robot_commands_median0.007894855274464143
set_robot_commands_min0.00763858065885656
sim_compute_performance-ego0_max0.00753470028147978
sim_compute_performance-ego0_mean0.006549107944875724
sim_compute_performance-ego0_median0.006686420097394752
sim_compute_performance-ego0_min0.005288891303233611
sim_compute_sim_state_max0.02786695536445169
sim_compute_sim_state_mean0.02305204696677112
sim_compute_sim_state_median0.024744210597082355
sim_compute_sim_state_min0.01485281130846809
sim_render-ego0_max0.01258417578304515
sim_render-ego0_mean0.010922378308641982
sim_render-ego0_median0.010459970740729362
sim_render-ego0_min0.010185395970064052
simulation-passed1
step_physics_max0.21928692560126312
step_physics_mean0.18777048898926907
step_physics_median0.18545718151699545
step_physics_min0.1608806673218222
survival_time_max41.049999999999805
survival_time_mean13.112499999999947
survival_time_min3.349999999999996
No reset possible
Job 62289 | submission 13720 | Frank (Chude) Qian 🇨🇦 | CBC Net v1 | aido-LFP_full-sim-validation | step 352 | status: success | up to date: yes | evaluator: reg02 | duration: 0:08:51
survival_time_median6.024999999999986
in-drivable-lane_median2.8249999999999904
driven_lanedir_consec_median0.8269966491210599
deviation-center-line_median0.1259313496227091


other stats
agent_compute-ego0_max0.09563922771187716
agent_compute-ego0_mean0.09079511300773374
agent_compute-ego0_median0.08989148755376183
agent_compute-ego0_min0.08775824921153416
complete-iteration_max0.561736648027287
complete-iteration_mean0.46713720798029806
complete-iteration_median0.47170164318461166
complete-iteration_min0.36340889752468214
deviation-center-line_max0.630121582546472
deviation-center-line_mean0.23068530597329995
deviation-center-line_min0.040756942101309514
deviation-heading_max3.294038201636309
deviation-heading_mean1.289765234924316
deviation-heading_median0.7439285008988457
deviation-heading_min0.3771657362632638
driven_any_max4.21889229608998
driven_any_mean2.0948458544202073
driven_any_median1.8642067941486635
driven_any_min0.4320775332935221
driven_lanedir_consec_max2.981373575777413
driven_lanedir_consec_mean1.2654977234543054
driven_lanedir_consec_min0.42662401979768894
driven_lanedir_max2.981373575777413
driven_lanedir_mean1.2654977234543054
driven_lanedir_median0.8269966491210599
driven_lanedir_min0.42662401979768894
get_duckie_state_max0.07678612282401637
get_duckie_state_mean0.0544892500819881
get_duckie_state_median0.06361064693996257
get_duckie_state_min0.013949583624010888
get_robot_state_max0.012859080967150237
get_robot_state_mean0.01141594907530311
get_robot_state_median0.011187456363895032
get_robot_state_min0.010429802606272144
get_state_dump_max0.02446573390517124
get_state_dump_mean0.021776353936443935
get_state_dump_median0.022223522604065415
get_state_dump_min0.01819263663247367
get_ui_image_max0.06388769815134447
get_ui_image_mean0.054166081834400304
get_ui_image_median0.05473994549029383
get_ui_image_min0.04329673820566908
in-drivable-lane_max4.149999999999985
in-drivable-lane_mean2.4499999999999913
in-drivable-lane_min0.0
per-episodes
details{"LFP-full-loop-000-ego0": {"driven_any": 0.4320775332935221, "get_ui_image": 0.05183640906685277, "step_physics": 0.14851124663102, "survival_time": 1.850000000000001, "driven_lanedir": 0.42662401979768894, "get_state_dump": 0.022768779804832055, "get_robot_state": 0.012859080967150237, "sim_render-ego0": 0.012773739664178146, "get_duckie_state": 0.07678612282401637, "in-drivable-lane": 0.0, "deviation-heading": 0.3771657362632638, "agent_compute-ego0": 0.08885943262200606, "complete-iteration": 0.4549732333735416, "set_robot_commands": 0.011597627087643272, "deviation-center-line": 0.040756942101309514, "driven_lanedir_consec": 0.42662401979768894, "sim_compute_sim_state": 0.02338736935665733, "sim_compute_performance-ego0": 0.005384533028853567}, "LFP-full-zigzag-000-ego0": {"driven_any": 4.21889229608998, "get_ui_image": 0.06388769815134447, "step_physics": 0.2489953029987424, "survival_time": 10.700000000000015, "driven_lanedir": 2.981373575777413, "get_state_dump": 0.02446573390517124, "get_robot_state": 0.010429802606272144, "sim_render-ego0": 0.010167670804400775, "get_duckie_state": 0.06478191530981729, "in-drivable-lane": 2.0999999999999925, "deviation-heading": 3.294038201636309, "agent_compute-ego0": 0.09563922771187716, "complete-iteration": 0.561736648027287, "set_robot_commands": 0.007871475885080737, "deviation-center-line": 0.630121582546472, "driven_lanedir_consec": 2.981373575777413, "sim_compute_sim_state": 0.029934361923572628, "sim_compute_performance-ego0": 0.0053369688433270125}, "LFP-full-techtrack-000-ego0": {"driven_any": 2.097289998344914, "get_ui_image": 0.05764348191373488, "step_physics": 0.1953072951120489, "survival_time": 6.749999999999984, "driven_lanedir": 1.0606168404099838, "get_state_dump": 0.02167826540329877, "get_robot_state": 0.010635204174939324, "sim_render-ego0": 0.009566086180069868, "get_duckie_state": 0.062439378570107854, "in-drivable-lane": 4.149999999999985, "deviation-heading": 0.8779243065232901, "agent_compute-ego0": 0.0909235424855176, "complete-iteration": 0.48843005299568176, "set_robot_commands": 0.006667151170618394, "deviation-center-line": 0.16066097541082722, "driven_lanedir_consec": 1.0606168404099838, "sim_compute_sim_state": 0.027801310314851647, "sim_compute_performance-ego0": 0.00554812655729406}, "LFP-full-small_loop-000-ego0": {"driven_any": 1.6311235899524137, "get_ui_image": 0.04329673820566908, "step_physics": 0.1489993991138779, "survival_time": 5.299999999999989, "driven_lanedir": 0.5933764578321359, "get_state_dump": 0.01819263663247367, "get_robot_state": 0.01173970855285074, "sim_render-ego0": 0.009835067196427103, "get_duckie_state": 0.013949583624010888, "in-drivable-lane": 3.5499999999999883, "deviation-heading": 0.6099326952744012, "agent_compute-ego0": 0.08775824921153416, "complete-iteration": 0.36340889752468214, "set_robot_commands": 0.00705861822466984, "deviation-center-line": 0.09120172383459098, "driven_lanedir_consec": 0.5933764578321359, "sim_compute_sim_state": 0.016355755173157308, "sim_compute_performance-ego0": 0.006012000770212334}}
set_robot_commands_max0.011597627087643272
set_robot_commands_mean0.008298718092003061
set_robot_commands_median0.007465047054875289
set_robot_commands_min0.006667151170618394
sim_compute_performance-ego0_max0.006012000770212334
sim_compute_performance-ego0_mean0.005570407299921743
sim_compute_performance-ego0_median0.005466329793073814
sim_compute_performance-ego0_min0.0053369688433270125
sim_compute_sim_state_max0.029934361923572628
sim_compute_sim_state_mean0.02436969919205973
sim_compute_sim_state_median0.025594339835754487
sim_compute_sim_state_min0.016355755173157308
sim_render-ego0_max0.012773739664178146
sim_render-ego0_mean0.010585640961268974
sim_render-ego0_median0.01000136900041394
sim_render-ego0_min0.009566086180069868
simulation-passed1
step_physics_max0.2489953029987424
step_physics_mean0.1854533109639223
step_physics_median0.1721533471129634
step_physics_min0.14851124663102
survival_time_max10.700000000000015
survival_time_mean6.149999999999999
survival_time_min1.850000000000001
No reset possible
Job 62287 | submission 13722 | Frank (Chude) Qian 🇨🇦 | CBC Net v1 | aido-LFP_full-sim-validation | step 352 | status: success | up to date: yes | evaluator: reg02 | duration: 0:12:12
survival_time_median6.774999999999984
in-drivable-lane_median0.0
driven_lanedir_consec_median2.278087553074077
deviation-center-line_median0.4807952762920742


other stats
agent_compute-ego0_max0.10121784806251526
agent_compute-ego0_mean0.09494033653578732
agent_compute-ego0_median0.09487707093066496
agent_compute-ego0_min0.08878935621930407
complete-iteration_max0.5822893796682964
complete-iteration_mean0.5002246754571094
complete-iteration_median0.5148648651958514
complete-iteration_min0.38887959176843817
deviation-center-line_max1.4589403956416636
deviation-center-line_mean0.627343167131525
deviation-center-line_min0.08884172030028836
deviation-heading_max5.378570894476008
deviation-heading_mean2.276270884533667
deviation-heading_median1.5757416446331307
deviation-heading_min0.5750293543923994
driven_any_max7.60384739383258
driven_any_mean3.192159122063889
driven_any_median2.3572454967009975
driven_any_min0.4502981010209801
driven_lanedir_consec_max5.640815264904232
driven_lanedir_consec_mean2.658801641156088
driven_lanedir_consec_min0.43821619357196706
driven_lanedir_max5.640815264904232
driven_lanedir_mean2.658801641156088
driven_lanedir_median2.278087553074077
driven_lanedir_min0.43821619357196706
get_duckie_state_max0.08295401334762573
get_duckie_state_mean0.05734393332384033
get_duckie_state_median0.06608422520269416
get_duckie_state_min0.0142532695423473
get_robot_state_max0.012852996587753296
get_robot_state_mean0.01235088413082346
get_robot_state_median0.01229501189486137
get_robot_state_min0.011960516145817803
get_state_dump_max0.027388477325439455
get_state_dump_mean0.022820863714309664
get_state_dump_median0.023287211002929235
get_state_dump_min0.017320555525940733
get_ui_image_max0.06482338844668167
get_ui_image_mean0.05696318747863463
get_ui_image_median0.05917992549283164
get_ui_image_min0.044669510482193586
in-drivable-lane_max4.000000000000035
in-drivable-lane_mean1.0000000000000089
in-drivable-lane_min0.0
per-episodes
details{"LFP-full-loop-000-ego0": {"driven_any": 0.4502981010209801, "get_ui_image": 0.057334572076797485, "step_physics": 0.1622810840606689, "survival_time": 1.950000000000001, "driven_lanedir": 0.43821619357196706, "get_state_dump": 0.027388477325439455, "get_robot_state": 0.012852996587753296, "sim_render-ego0": 0.010476064682006837, "get_duckie_state": 0.08295401334762573, "in-drivable-lane": 0.0, "deviation-heading": 0.5750293543923994, "agent_compute-ego0": 0.10121784806251526, "complete-iteration": 0.48843986392021177, "set_robot_commands": 0.00826951265335083, "deviation-center-line": 0.08884172030028836, "driven_lanedir_consec": 0.43821619357196706, "sim_compute_sim_state": 0.018958240747451786, "sim_compute_performance-ego0": 0.006482368707656861}, "LFP-full-zigzag-000-ego0": {"driven_any": 7.60384739383258, "get_ui_image": 0.06482338844668167, "step_physics": 0.2591141617025128, "survival_time": 19.600000000000144, "driven_lanedir": 5.640815264904232, "get_state_dump": 0.023479517482922586, "get_robot_state": 0.011960516145817803, "sim_render-ego0": 0.01080367704687531, "get_duckie_state": 0.06922192245949316, "in-drivable-lane": 4.000000000000035, "deviation-heading": 5.378570894476008, "agent_compute-ego0": 0.09617683905681582, "complete-iteration": 0.5822893796682964, "set_robot_commands": 0.007355242285109659, "deviation-center-line": 1.4589403956416636, "driven_lanedir_consec": 5.640815264904232, "sim_compute_sim_state": 0.032359386521744665, "sim_compute_performance-ego0": 0.006767213496239737}, "LFP-full-techtrack-000-ego0": {"driven_any": 2.027674569714446, "get_ui_image": 0.06102527890886579, "step_physics": 0.2353677409035819, "survival_time": 5.899999999999987, "driven_lanedir": 1.9497400554267788, "get_state_dump": 0.023094904522935885, "get_robot_state": 0.012342210577315644, "sim_render-ego0": 0.008655327708781267, "get_duckie_state": 0.06294652794589516, "in-drivable-lane": 0.0, "deviation-heading": 1.4055795151056212, "agent_compute-ego0": 0.09357730280451414, "complete-iteration": 0.5412898664714909, "set_robot_commands": 0.008679391957130753, "deviation-center-line": 0.4391191575503109, "driven_lanedir_consec": 1.9497400554267788, "sim_compute_sim_state": 0.029546415104585538, "sim_compute_performance-ego0": 0.00582171488208931}, "LFP-full-small_loop-000-ego0": {"driven_any": 2.6868164236875485, "get_ui_image": 0.044669510482193586, "step_physics": 0.17044262452559036, "survival_time": 7.649999999999981, "driven_lanedir": 2.606435050721375, "get_state_dump": 0.017320555525940733, "get_robot_state": 0.0122478132124071, "sim_render-ego0": 0.010663269402144793, "get_duckie_state": 0.0142532695423473, "in-drivable-lane": 0.0, "deviation-heading": 1.7459037741606402, "agent_compute-ego0": 0.08878935621930407, "complete-iteration": 0.38887959176843817, "set_robot_commands": 0.007512682444089419, "deviation-center-line": 0.5224713950338374, "driven_lanedir_consec": 2.606435050721375, "sim_compute_sim_state": 0.015731874998513754, "sim_compute_performance-ego0": 0.007026749771910828}}
set_robot_commands_max0.008679391957130753
set_robot_commands_mean0.007954207334920164
set_robot_commands_median0.007891097548720125
set_robot_commands_min0.007355242285109659
sim_compute_performance-ego0_max0.007026749771910828
sim_compute_performance-ego0_mean0.006524511714474184
sim_compute_performance-ego0_median0.006624791101948298
sim_compute_performance-ego0_min0.00582171488208931
sim_compute_sim_state_max0.032359386521744665
sim_compute_sim_state_mean0.02414897934307393
sim_compute_sim_state_median0.024252327926018655
sim_compute_sim_state_min0.015731874998513754
sim_render-ego0_max0.01080367704687531
sim_render-ego0_mean0.010149584709952052
sim_render-ego0_median0.010569667042075811
sim_render-ego0_min0.008655327708781267
simulation-passed1
step_physics_max0.2591141617025128
step_physics_mean0.2068014027980885
step_physics_median0.20290518271458613
step_physics_min0.1622810840606689
survival_time_max19.600000000000144
survival_time_mean8.775000000000029
survival_time_min1.950000000000001
No reset possible
Job 62272 | submission 13505 | András Kalapos 🇭🇺 | real-v1.0-3091-310 | aido-LF-sim-validation | step 347 | status: success | up to date: yes | evaluator: reg02 | duration: 0:58:09
driven_lanedir_consec_median28.770455989580604
survival_time_median59.99999999999873
deviation-center-line_median2.5276460017285243
in-drivable-lane_median0.0


other stats
agent_compute-ego0_max0.03894159537767192
agent_compute-ego0_mean0.0346000954868593
agent_compute-ego0_median0.03371647643010682
agent_compute-ego0_min0.03202583370955163
complete-iteration_max0.4596279431739318
complete-iteration_mean0.3984132732181724
complete-iteration_median0.40063228684599256
complete-iteration_min0.3327605760067726
deviation-center-line_max2.6336778637309397
deviation-center-line_mean2.334584824550065
deviation-center-line_min1.6493694310122735
deviation-heading_max8.019650448742077
deviation-heading_mean7.2023007494639915
deviation-heading_median7.343658708102941
deviation-heading_min6.102235132908008
driven_any_max30.972736379731742
driven_any_mean29.425421807233526
driven_any_median29.034047566629088
driven_any_min28.66085571594418
driven_lanedir_consec_max30.61801965515254
driven_lanedir_consec_mean29.1190053344364
driven_lanedir_consec_min28.31708970343186
driven_lanedir_max30.61801965515254
driven_lanedir_mean29.1190053344364
driven_lanedir_median28.770455989580604
driven_lanedir_min28.31708970343186
get_duckie_state_max3.751767465812181e-06
get_duckie_state_mean3.2565675111337066e-06
get_duckie_state_median3.1537358508717504e-06
get_duckie_state_min2.967030876979145e-06
get_robot_state_max0.015217837644159349
get_robot_state_mean0.013178180893890864
get_robot_state_median0.01278751954547968
get_robot_state_min0.01191984684044475
get_state_dump_max0.02389870059182503
get_state_dump_mean0.01908654287196913
get_state_dump_median0.01907422113775909
get_state_dump_min0.014299028620533304
get_ui_image_max0.06250298152259744
get_ui_image_mean0.05378632442242497
get_ui_image_median0.05405138057038548
get_ui_image_min0.044539555026331515
in-drivable-lane_max0.0
in-drivable-lane_mean0.0
in-drivable-lane_min0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 30.972736379731742, "get_ui_image": 0.05189087527876194, "step_physics": 0.19513519657938605, "survival_time": 59.99999999999873, "driven_lanedir": 30.61801965515254, "get_state_dump": 0.02389870059182503, "get_robot_state": 0.015217837644159349, "sim_render-ego0": 0.011205360950974998, "get_duckie_state": 2.967030876979145e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.1091503763884445, "agent_compute-ego0": 0.03894159537767192, "complete-iteration": 0.37890622994981144, "set_robot_commands": 0.009243262598258473, "deviation-center-line": 2.555440262768427, "driven_lanedir_consec": 30.61801965515254, "sim_compute_sim_state": 0.025543088023608967, "sim_compute_performance-ego0": 0.007622441086145761}, "LF-norm-zigzag-000-ego0": {"driven_any": 28.69556498806268, "get_ui_image": 0.06250298152259744, "step_physics": 0.27703158682728685, "survival_time": 59.99999999999873, "driven_lanedir": 28.31708970343186, "get_state_dump": 0.01797840259752107, "get_robot_state": 0.012615224503160615, "sim_render-ego0": 0.010434326780130228, "get_duckie_state": 3.751767465812181e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.019650448742077, "agent_compute-ego0": 0.03202583370955163, "complete-iteration": 0.4596279431739318, "set_robot_commands": 0.00789400302400994, "deviation-center-line": 2.4998517406886216, "driven_lanedir_consec": 28.31708970343186, "sim_compute_sim_state": 0.03214480815382425, "sim_compute_performance-ego0": 0.006795602277554044}, "LF-norm-techtrack-000-ego0": {"driven_any": 29.372530145195498, "get_ui_image": 0.056211885862009016, "step_physics": 0.24049922091875545, "survival_time": 59.99999999999873, "driven_lanedir": 29.050999186927157, "get_state_dump": 0.020170039677997117, "get_robot_state": 0.012959814587798742, "sim_render-ego0": 0.01039627211774815, "get_duckie_state": 3.110558464564848e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.578167039817438, "agent_compute-ego0": 0.035182890348093, "complete-iteration": 0.4223583437421737, "set_robot_commands": 0.008234767492962916, "deviation-center-line": 2.6336778637309397, "driven_lanedir_consec": 29.050999186927157, "sim_compute_sim_state": 0.03157020131316808, "sim_compute_performance-ego0": 0.0069148564318832414}, "LF-norm-small_loop-000-ego0": {"driven_any": 28.66085571594418, "get_ui_image": 0.044539555026331515, "step_physics": 0.18923699548103529, "survival_time": 59.99999999999873, "driven_lanedir": 28.489912792234048, "get_state_dump": 0.014299028620533304, "get_robot_state": 0.01191984684044475, "sim_render-ego0": 0.009578888064915692, "get_duckie_state": 3.196913237178653e-06, "in-drivable-lane": 0.0, "deviation-heading": 6.102235132908008, "agent_compute-ego0": 0.032250062512120634, "complete-iteration": 0.3327605760067726, "set_robot_commands": 0.0077329970319304836, "deviation-center-line": 1.6493694310122735, "driven_lanedir_consec": 28.489912792234048, "sim_compute_sim_state": 0.01698791434822432, "sim_compute_performance-ego0": 0.00600220480131964}}
set_robot_commands_max: 0.009243262598258473
set_robot_commands_mean: 0.008276257536790453
set_robot_commands_median: 0.008064385258486427
set_robot_commands_min: 0.0077329970319304836
sim_compute_performance-ego0_max: 0.007622441086145761
sim_compute_performance-ego0_mean: 0.0068337761492256715
sim_compute_performance-ego0_median: 0.0068552293547186425
sim_compute_performance-ego0_min: 0.00600220480131964
sim_compute_sim_state_max: 0.03214480815382425
sim_compute_sim_state_mean: 0.026561502959706405
sim_compute_sim_state_median: 0.028556644668388524
sim_compute_sim_state_min: 0.01698791434822432
sim_render-ego0_max: 0.011205360950974998
sim_render-ego0_mean: 0.010403711978442266
sim_render-ego0_median: 0.010415299448939187
sim_render-ego0_min: 0.009578888064915692
simulation-passed: 1
step_physics_max: 0.27703158682728685
step_physics_mean: 0.22547574995161593
step_physics_median: 0.2178172087490707
step_physics_min: 0.18923699548103529
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
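Note on reading these result blocks: each aggregate row (*_max, *_mean, *_median, *_min) is a plain descriptive statistic of one metric taken across the four episodes listed under per-episodes. A minimal sketch of that reduction, assuming the per-episodes details blob has been saved to a file named details.json (the file name is hypothetical):

import json
from statistics import mean, median

with open("details.json") as f:
    episodes = json.load(f)  # e.g. {"LF-norm-loop-000-ego0": {...}, ...}

# Gather each metric's values across the episodes, then reduce.
metrics = {}
for episode in episodes.values():
    for key, value in episode.items():
        metrics.setdefault(key, []).append(value)

for key in sorted(metrics):
    values = metrics[key]
    print(f"{key}_max: {max(values)}")
    print(f"{key}_mean: {mean(values)}")
    print(f"{key}_median: {median(values)}")
    print(f"{key}_min: {min(values)}")

With four episodes the median is the average of the two middle values, which is why a reported median need not equal any single episode's value.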
Job 62269 | submission 13550 | András Kalapos 🇭🇺 | real-v0.9-3092-363 | aido-LF-sim-validation | step 347 | success | up to date: yes | evaluator reg02 | duration 0:59:01
driven_lanedir_consec_median: 27.212188486805196
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.62676217493598
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.03353529607723595
agent_compute-ego0_mean: 0.03254528104216729
agent_compute-ego0_median: 0.03249515016112697
agent_compute-ego0_min: 0.03165552776917927
complete-iteration_max: 0.4571869561912416
complete-iteration_mean: 0.38963690874082263
complete-iteration_median: 0.3856349245495443
complete-iteration_min: 0.33009082967296033
deviation-center-line_max: 2.932511547707182
deviation-center-line_mean: 2.5941046265155183
deviation-center-line_min: 2.190382608482932
deviation-heading_max: 8.427061044071385
deviation-heading_mean: 6.693100728696232
deviation-heading_median: 6.4702035125765915
deviation-heading_min: 5.40493484556036
driven_any_max: 28.544109959532868
driven_any_mean: 27.58993952637571
driven_any_median: 27.45047824301168
driven_any_min: 26.914691659946595
driven_lanedir_consec_max: 28.38493863519559
driven_lanedir_consec_mean: 27.382363538733344
driven_lanedir_consec_min: 26.720138546127387
driven_lanedir_max: 28.38493863519559
driven_lanedir_mean: 27.382363538733344
driven_lanedir_median: 27.212188486805196
driven_lanedir_min: 26.720138546127387
get_duckie_state_max: 2.963854609480706e-06
get_duckie_state_mean: 2.916111338644798e-06
get_duckie_state_median: 2.925143849343483e-06
get_duckie_state_min: 2.850303046411519e-06
get_robot_state_max: 0.012690703735859765
get_robot_state_mean: 0.012091052968932826
get_robot_state_median: 0.012169817702954852
get_robot_state_min: 0.01133387273396183
get_state_dump_max: 0.01895506832621477
get_state_dump_mean: 0.017747191118261004
get_state_dump_median: 0.017827152610321428
get_state_dump_min: 0.01637939092618639
get_ui_image_max: 0.06305981158813172
get_ui_image_mean: 0.05448211619101596
get_ui_image_median: 0.05422077716935386
get_ui_image_min: 0.04642709883722437
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 28.544109959532868, "get_ui_image": 0.05193700163092443, "step_physics": 0.19412544188551065, "survival_time": 59.99999999999873, "driven_lanedir": 28.38493863519559, "get_state_dump": 0.01895506832621477, "get_robot_state": 0.012690703735859765, "sim_render-ego0": 0.010178296393299976, "get_duckie_state": 2.9271290165300077e-06, "in-drivable-lane": 0.0, "deviation-heading": 5.40493484556036, "agent_compute-ego0": 0.032621991525184704, "complete-iteration": 0.35923256147513283, "set_robot_commands": 0.0077917831922748705, "deviation-center-line": 2.190382608482932, "driven_lanedir_consec": 28.38493863519559, "sim_compute_sim_state": 0.02423768039547732, "sim_compute_performance-ego0": 0.0065021125799809565}, "LF-norm-zigzag-000-ego0": {"driven_any": 26.914691659946595, "get_ui_image": 0.06305981158813172, "step_physics": 0.27352220152538087, "survival_time": 59.99999999999873, "driven_lanedir": 26.720138546127387, "get_state_dump": 0.01828465354531929, "get_robot_state": 0.012363203757013708, "sim_render-ego0": 0.009849981304807132, "get_duckie_state": 2.923158682156959e-06, "in-drivable-lane": 0.0, "deviation-heading": 6.542717789450528, "agent_compute-ego0": 0.03353529607723595, "complete-iteration": 0.4571869561912416, "set_robot_commands": 0.007942230278605923, "deviation-center-line": 2.452255038352363, "driven_lanedir_consec": 26.720138546127387, "sim_compute_sim_state": 0.032015177729127806, "sim_compute_performance-ego0": 0.006419741045326913}, "LF-norm-techtrack-000-ego0": {"driven_any": 27.18064622401334, "get_ui_image": 0.05650455270778329, "step_physics": 0.24223287992930032, "survival_time": 59.99999999999873, "driven_lanedir": 26.989472796761536, "get_state_dump": 0.01637939092618639, "get_robot_state": 0.011976431648896, "sim_render-ego0": 0.010298287838722248, "get_duckie_state": 2.850303046411519e-06, "in-drivable-lane": 0.0, "deviation-heading": 6.397689235702655, "agent_compute-ego0": 0.032368308797069235, "complete-iteration": 0.41203728762395575, "set_robot_commands": 0.007201063345115846, "deviation-center-line": 2.932511547707182, "driven_lanedir_consec": 26.989472796761536, "sim_compute_sim_state": 0.02917540123023955, "sim_compute_performance-ego0": 0.005712798394132514}, "LF-norm-small_loop-000-ego0": {"driven_any": 27.72031026201002, "get_ui_image": 0.04642709883722437, "step_physics": 0.18314919860833492, "survival_time": 59.99999999999873, "driven_lanedir": 27.43490417684886, "get_state_dump": 0.017369651675323564, "get_robot_state": 0.01133387273396183, "sim_render-ego0": 0.010067184402186308, "get_duckie_state": 2.963854609480706e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.427061044071385, "agent_compute-ego0": 0.03165552776917927, "complete-iteration": 0.33009082967296033, "set_robot_commands": 0.00736519121111283, "deviation-center-line": 2.8012693115195972, "driven_lanedir_consec": 27.43490417684886, "sim_compute_sim_state": 0.01634284796861685, "sim_compute_performance-ego0": 0.006187021484184424}}
set_robot_commands_max: 0.007942230278605923
set_robot_commands_mean: 0.007575067006777367
set_robot_commands_median: 0.00757848720169385
set_robot_commands_min: 0.007201063345115846
sim_compute_performance-ego0_max: 0.0065021125799809565
sim_compute_performance-ego0_mean: 0.006205418375906202
sim_compute_performance-ego0_median: 0.006303381264755669
sim_compute_performance-ego0_min: 0.005712798394132514
sim_compute_sim_state_max: 0.032015177729127806
sim_compute_sim_state_mean: 0.025442776830865384
sim_compute_sim_state_median: 0.02670654081285843
sim_compute_sim_state_min: 0.01634284796861685
sim_render-ego0_max: 0.010298287838722248
sim_render-ego0_mean: 0.010098437484753916
sim_render-ego0_median: 0.010122740397743142
sim_render-ego0_min: 0.009849981304807132
simulation-passed: 1
step_physics_max: 0.27352220152538087
step_physics_mean: 0.22325743048713168
step_physics_median: 0.2181791609074055
step_physics_min: 0.18314919860833492
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 62265 | submission 13611 | Raphael Jean | mobile-segmentation-pedestrian | aido-LF-sim-validation | step 347 | success | up to date: yes | evaluator reg02 | duration 1:11:12
driven_lanedir_consec_median: 25.99055509912304
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.227842224441896
in-drivable-lane_median: 0.29999999999998295


other stats
agent_compute-ego0_max: 0.2068780500823314
agent_compute-ego0_mean: 0.1616767798236367
agent_compute-ego0_median: 0.2024070100919293
agent_compute-ego0_min: 0.0350150490283569
complete-iteration_max: 0.6598866972498453
complete-iteration_mean: 0.5476804211375914
complete-iteration_median: 0.5426496183941705
complete-iteration_min: 0.4455357505121795
deviation-center-line_max: 3.684422663876693
deviation-center-line_mean: 2.9857185131203123
deviation-center-line_min: 1.8027669397207648
deviation-heading_max: 10.48757317919206
deviation-heading_mean: 9.53899936312267
deviation-heading_median: 10.040804787357429
deviation-heading_min: 7.586814698583766
driven_any_max: 27.53710913592516
driven_any_mean: 26.829308881564373
driven_any_median: 26.671985515825945
driven_any_min: 26.43615535868045
driven_lanedir_consec_max: 27.19551493750692
driven_lanedir_consec_mean: 25.87262346708257
driven_lanedir_consec_min: 24.31386873257729
driven_lanedir_max: 27.19551493750692
driven_lanedir_mean: 25.87262346708257
driven_lanedir_median: 25.99055509912304
driven_lanedir_min: 24.31386873257729
get_duckie_state_max: 2.9322904512149707e-06
get_duckie_state_mean: 2.849856383794551e-06
get_duckie_state_median: 2.873033210697222e-06
get_duckie_state_min: 2.7210686625687904e-06
get_robot_state_max: 0.0132142145568187
get_robot_state_mean: 0.012198721340157209
get_robot_state_median: 0.012124011459001195
get_robot_state_min: 0.011332647885807746
get_state_dump_max: 0.020410148031407848
get_state_dump_mean: 0.017719021248479967
get_state_dump_median: 0.01758550823380012
get_state_dump_min: 0.015294920494911771
get_ui_image_max: 0.06388565364427908
get_ui_image_mean: 0.054764835115872657
get_ui_image_median: 0.05416108736090616
get_ui_image_min: 0.0468515120973992
in-drivable-lane_max: 3.44999999999997
in-drivable-lane_mean: 1.012499999999984
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 27.53710913592516, "get_ui_image": 0.05374435481183436, "step_physics": 0.224322155850019, "survival_time": 59.99999999999873, "driven_lanedir": 27.19551493750692, "get_state_dump": 0.020410148031407848, "get_robot_state": 0.0132142145568187, "sim_render-ego0": 0.0114597273706695, "get_duckie_state": 2.8219151556442224e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.586814698583766, "agent_compute-ego0": 0.20618894058500695, "complete-iteration": 0.572006070742897, "set_robot_commands": 0.00935278347787214, "deviation-center-line": 1.8027669397207648, "driven_lanedir_consec": 27.19551493750692, "sim_compute_sim_state": 0.02557184932432405, "sim_compute_performance-ego0": 0.007518960474730531}, "LF-norm-zigzag-000-ego0": {"driven_any": 26.683901460624178, "get_ui_image": 0.06388565364427908, "step_physics": 0.29939999548620627, "survival_time": 59.99999999999873, "driven_lanedir": 25.87390914743265, "get_state_dump": 0.01748304422650111, "get_robot_state": 0.01226708950547751, "sim_render-ego0": 0.011127067743788948, "get_duckie_state": 2.9322904512149707e-06, "in-drivable-lane": 0.5999999999999659, "deviation-heading": 9.676336862644302, "agent_compute-ego0": 0.2068780500823314, "complete-iteration": 0.6598866972498453, "set_robot_commands": 0.008314710969631121, "deviation-center-line": 3.014977503645415, "driven_lanedir_consec": 25.87390914743265, "sim_compute_sim_state": 0.03325263566518207, "sim_compute_performance-ego0": 0.007047340137376873}, "LF-norm-techtrack-000-ego0": {"driven_any": 26.43615535868045, "get_ui_image": 0.054577819909977975, "step_physics": 0.2791117897240149, "survival_time": 59.99999999999873, "driven_lanedir": 24.31386873257729, "get_state_dump": 0.015294920494911771, "get_robot_state": 0.011332647885807746, "sim_render-ego0": 0.009903962765009973, "get_duckie_state": 2.7210686625687904e-06, "in-drivable-lane": 3.44999999999997, "deviation-heading": 10.48757317919206, "agent_compute-ego0": 0.0350150490283569, "complete-iteration": 0.4455357505121795, "set_robot_commands": 0.006689399207859214, "deviation-center-line": 3.440706945238377, "driven_lanedir_consec": 24.31386873257729, "sim_compute_sim_state": 0.02764714488776697, "sim_compute_performance-ego0": 0.005736178502155085}, "LF-norm-small_loop-000-ego0": {"driven_any": 26.660069571027712, "get_ui_image": 0.0468515120973992, "step_physics": 0.19583722951509475, "survival_time": 59.99999999999873, "driven_lanedir": 26.10720105081343, "get_state_dump": 0.017687972241099133, "get_robot_state": 0.01198093341252488, "sim_render-ego0": 0.01068732641221681, "get_duckie_state": 2.9241512657502213e-06, "in-drivable-lane": 0.0, "deviation-heading": 10.405272712070552, "agent_compute-ego0": 0.19862507959885164, "complete-iteration": 0.5132931660454438, "set_robot_commands": 0.00865624111756794, "deviation-center-line": 3.684422663876693, "driven_lanedir_consec": 26.10720105081343, "sim_compute_sim_state": 0.01637914000105401, "sim_compute_performance-ego0": 0.006357289273772609}}
set_robot_commands_max: 0.00935278347787214
set_robot_commands_mean: 0.008253283693232603
set_robot_commands_median: 0.00848547604359953
set_robot_commands_min: 0.006689399207859214
sim_compute_performance-ego0_max: 0.007518960474730531
sim_compute_performance-ego0_mean: 0.006664942097008774
sim_compute_performance-ego0_median: 0.006702314705574741
sim_compute_performance-ego0_min: 0.005736178502155085
sim_compute_sim_state_max: 0.03325263566518207
sim_compute_sim_state_mean: 0.025712692469581775
sim_compute_sim_state_median: 0.02660949710604551
sim_compute_sim_state_min: 0.01637914000105401
sim_render-ego0_max: 0.0114597273706695
sim_render-ego0_mean: 0.010794521072921306
sim_render-ego0_median: 0.01090719707800288
sim_render-ego0_min: 0.009903962765009973
simulation-passed: 1
step_physics_max: 0.29939999548620627
step_physics_mean: 0.24966779264383376
step_physics_median: 0.25171697278701693
step_physics_min: 0.19583722951509475
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 62261 | submission 13632 | Raphael Jean | mobile-segmentation | aido-LFP-sim-validation | step 350 | success | up to date: yes | evaluator reg02 | duration 0:07:55
survival_time_median: 4.7749999999999915
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.6540628337251968
deviation-center-line_median: 0.2397796923052304


other stats
agent_compute-ego0_max: 0.212291382728739
agent_compute-ego0_mean: 0.17876970723387642
agent_compute-ego0_median: 0.20492634695399303
agent_compute-ego0_min: 0.0929347522987807
complete-iteration_max: 0.6688484435385846
complete-iteration_mean: 0.6013916794586958
complete-iteration_median: 0.6081870767203243
complete-iteration_min: 0.5203441208555498
deviation-center-line_max: 0.49607251562928933
deviation-center-line_mean: 0.27026079948182746
deviation-center-line_min: 0.10541129768755962
deviation-heading_max: 1.397148742176067
deviation-heading_mean: 0.9124148710125736
deviation-heading_median: 0.8729895002548466
deviation-heading_min: 0.5065317413645336
driven_any_max: 2.5146002636483153
driven_any_mean: 1.62345917349445
driven_any_median: 1.7218132658317664
driven_any_min: 0.5356098986659518
driven_lanedir_consec_max: 2.4665678012704655
driven_lanedir_consec_mean: 1.5747255102224766
driven_lanedir_consec_min: 0.5242085721690475
driven_lanedir_max: 2.4665678012704655
driven_lanedir_mean: 1.5747255102224766
driven_lanedir_median: 1.6540628337251968
driven_lanedir_min: 0.5242085721690475
get_duckie_state_max: 0.09049097762620154
get_duckie_state_mean: 0.056278849074513346
get_duckie_state_median: 0.060557564381448374
get_duckie_state_min: 0.013509289908955117
get_robot_state_max: 0.013030629512692286
get_robot_state_mean: 0.01174733729436703
get_robot_state_median: 0.012043307189682134
get_robot_state_min: 0.009872105285411572
get_state_dump_max: 0.028484257784756748
get_state_dump_mean: 0.023617672630925342
get_state_dump_median: 0.025665889077998223
get_state_dump_min: 0.014654654582948175
get_ui_image_max: 0.06456590206065077
get_ui_image_mean: 0.05853844467366303
get_ui_image_median: 0.05960039630244914
get_ui_image_min: 0.05038708402910305
in-drivable-lane_max: 0.09999999999999964
in-drivable-lane_mean: 0.02499999999999991
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.2293482549599073, "get_ui_image": 0.05616442034067201, "step_physics": 0.23116918634777223, "survival_time": 5.999999999999987, "driven_lanedir": 2.122829695042795, "get_state_dump": 0.028484257784756748, "get_robot_state": 0.013030629512692286, "sim_render-ego0": 0.011256428789501349, "get_duckie_state": 0.09049097762620154, "in-drivable-lane": 0.09999999999999964, "deviation-heading": 1.397148742176067, "agent_compute-ego0": 0.0929347522987807, "complete-iteration": 0.5580047910863702, "set_robot_commands": 0.008192564830307133, "deviation-center-line": 0.35094070877799066, "driven_lanedir_consec": 2.122829695042795, "sim_compute_sim_state": 0.019824104860794445, "sim_compute_performance-ego0": 0.0062084158590017275}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.5356098986659518, "get_ui_image": 0.06456590206065077, "step_physics": 0.25455744723056223, "survival_time": 2.3, "driven_lanedir": 0.5242085721690475, "get_state_dump": 0.024944772111608626, "get_robot_state": 0.011212425029024165, "sim_render-ego0": 0.008401439545002389, "get_duckie_state": 0.056547271444442423, "in-drivable-lane": 0.0, "deviation-heading": 0.5065317413645336, "agent_compute-ego0": 0.212291382728739, "complete-iteration": 0.6688484435385846, "set_robot_commands": 0.010186575828714574, "deviation-center-line": 0.12861867583247014, "driven_lanedir_consec": 0.5242085721690475, "sim_compute_sim_state": 0.020925526923321666, "sim_compute_performance-ego0": 0.004981015590911216}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.2142782767036258, "get_ui_image": 0.06303637226422627, "step_physics": 0.22985612021552193, "survival_time": 3.5499999999999954, "driven_lanedir": 1.1852959724075984, "get_state_dump": 0.026387006044387817, "get_robot_state": 0.012874189350340102, "sim_render-ego0": 0.009640074438518949, "get_duckie_state": 0.06456785731845432, "in-drivable-lane": 0.0, "deviation-heading": 0.5661313522019644, "agent_compute-ego0": 0.207333419058058, "complete-iteration": 0.6583693623542786, "set_robot_commands": 0.008851120869318644, "deviation-center-line": 0.10541129768755962, "driven_lanedir_consec": 1.1852959724075984, "sim_compute_sim_state": 0.028469459878073797, "sim_compute_performance-ego0": 0.007121026515960693}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.5146002636483153, "get_ui_image": 0.05038708402910305, "step_physics": 0.1914163054400728, "survival_time": 6.499999999999985, "driven_lanedir": 2.4665678012704655, "get_state_dump": 0.014654654582948175, "get_robot_state": 0.009872105285411572, "sim_render-ego0": 0.008848756324243909, "get_duckie_state": 0.013509289908955117, "in-drivable-lane": 0.0, "deviation-heading": 1.1798476483077287, "agent_compute-ego0": 0.20251927484992807, "complete-iteration": 0.5203441208555498, "set_robot_commands": 0.008278688401666307, "deviation-center-line": 0.49607251562928933, "driven_lanedir_consec": 2.4665678012704655, "sim_compute_sim_state": 0.014624177044584552, "sim_compute_performance-ego0": 0.006017508397575553}}
set_robot_commands_max: 0.010186575828714574
set_robot_commands_mean: 0.008877237482501663
set_robot_commands_median: 0.008564904635492476
set_robot_commands_min: 0.008192564830307133
sim_compute_performance-ego0_max: 0.007121026515960693
sim_compute_performance-ego0_mean: 0.006081991590862298
sim_compute_performance-ego0_median: 0.00611296212828864
sim_compute_performance-ego0_min: 0.004981015590911216
sim_compute_sim_state_max: 0.028469459878073797
sim_compute_sim_state_mean: 0.02096081717669361
sim_compute_sim_state_median: 0.020374815892058053
sim_compute_sim_state_min: 0.014624177044584552
sim_render-ego0_max: 0.011256428789501349
sim_render-ego0_mean: 0.00953667477431665
sim_render-ego0_median: 0.009244415381381429
sim_render-ego0_min: 0.008401439545002389
simulation-passed: 1
step_physics_max: 0.25455744723056223
step_physics_mean: 0.2267497648084823
step_physics_median: 0.2305126532816471
step_physics_min: 0.1914163054400728
survival_time_max: 6.499999999999985
survival_time_mean: 4.587499999999992
survival_time_min: 2.3
No reset possible
Job 62258 | submission 13634 | Raphael Jean | mobile-segmentation | aido-LFV-sim-validation | step 354 | aborted | up to date: no | evaluator reg02 | duration 0:07:42
KeyboardInterrupt:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 1169, in run_one
    heartbeat()
  File "/usr/local/lib/python3.8/dist-packages/duckietown_challenges_runner/runner.py", line 628, in heartbeat
    raise KeyboardInterrupt(msg_)
KeyboardInterrupt: The server told us to abort the job because: The challenge has been updated.
No reset possible
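The aborted status above is produced by the runner's heartbeat: while a job executes, the runner periodically asks the server whether the job is still wanted, and if the server answers "abort" (here because the challenge definition was updated mid-run), the runner raises KeyboardInterrupt to unwind the job. A generic sketch of that pattern, not the actual duckietown_challenges_runner code (job and poll_server are hypothetical stand-ins):

import time

def run_one(job, poll_server, interval_s=30.0):
    """Run a job to completion, aborting promptly if the server withdraws it."""
    last_beat = 0.0
    while not job.done():
        now = time.time()
        if now - last_beat > interval_s:
            last_beat = now
            verdict = poll_server(job.id)  # heartbeat: ask the server about this job
            if verdict.abort:
                # Mirrors the message above: surface the server's reason and unwind.
                raise KeyboardInterrupt(
                    f"The server told us to abort the job because: {verdict.reason}")
        job.step()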
Job 62255 | submission 13641 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LF-sim-validation | step 347 | failed | up to date: no | evaluator reg02 | duration 0:03:09
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
Job 62246 | submission 13640 | Jean-Sébastien Grondin 🇨🇦 | exercise_ros_template | aido-LF-sim-testing | step 348 | failed | up to date: no | evaluator reg02 | duration 0:03:56
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
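Both failures above are the same symptom: the experiment manager writes the initial "seed" topic to the ego0 container and then blocks waiting for an acknowledgement, and the submission never answers within the allotted time, so the run is classified as InvalidSubmission (an agent problem) rather than a host error. A generic illustration of the write-then-expect-reply pattern over a plain socket, not the actual zuper_nodes wire protocol (host, port, and the message framing are hypothetical):

import json
import socket

def write_topic_and_expect_reply(host, port, topic, data, timeout_s=60.0):
    """Send one topic message and block until the peer replies or the wait expires."""
    with socket.create_connection((host, port), timeout=timeout_s) as conn:
        conn.sendall(json.dumps({"topic": topic, "data": data}).encode() + b"\n")
        conn.settimeout(timeout_s)  # bound the wait for the acknowledgement
        try:
            reply = conn.recv(4096)
        except socket.timeout:
            # Corresponds to the SignalTimeout above: the agent never answered.
            raise TimeoutError(f"Timeout during connection to peer on topic {topic!r}")
        return json.loads(reply)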
Job 62238 | submission 13729 | YU CHEN | BC Net V2 | aido-LF-sim-validation | step 347 | success | up to date: no | evaluator reg02 | duration 0:30:20
driven_lanedir_consec_median: 4.245814041421714
survival_time_median: 20.550000000000157
deviation-center-line_median: 0.8979199372543613
in-drivable-lane_median: 9.225000000000076


other stats
agent_compute-ego0_max: 0.09632753646435516
agent_compute-ego0_mean: 0.09368068409559097
agent_compute-ego0_median: 0.09429746713576811
agent_compute-ego0_min: 0.08980026564647242
complete-iteration_max: 0.4761873559864689
complete-iteration_mean: 0.4270159077514718
complete-iteration_median: 0.4199450292764835
complete-iteration_min: 0.39198621646645143
deviation-center-line_max: 4.02331688046553
deviation-center-line_mean: 1.515693839832002
deviation-center-line_min: 0.2436186043537546
deviation-heading_max: 19.328873988776504
deviation-heading_mean: 7.4035635228130126
deviation-heading_median: 4.589654388320891
deviation-heading_min: 1.106071325833763
driven_any_max: 24.956531448998955
driven_any_mean: 11.117350116420647
driven_any_median: 8.008107064330135
driven_any_min: 3.4966548880233663
driven_lanedir_consec_max: 16.604217844348764
driven_lanedir_consec_mean: 6.619817130097297
driven_lanedir_consec_min: 1.383422593196992
driven_lanedir_max: 16.604217844348764
driven_lanedir_mean: 6.619817130097297
driven_lanedir_median: 4.245814041421714
driven_lanedir_min: 1.383422593196992
get_duckie_state_max: 2.93946636773144e-06
get_duckie_state_mean: 2.829116130284486e-06
get_duckie_state_median: 2.811604565658955e-06
get_duckie_state_min: 2.753789022088595e-06
get_robot_state_max: 0.012504544432304766
get_robot_state_mean: 0.011471787120832256
get_robot_state_median: 0.011504982059834964
get_robot_state_min: 0.010372639931354326
get_state_dump_max: 0.02140767715837313
get_state_dump_mean: 0.0186378303176964
get_state_dump_median: 0.01863558588605689
get_state_dump_min: 0.015872472340298684
get_ui_image_max: 0.06197452708466413
get_ui_image_mean: 0.05217134586817256
get_ui_image_median: 0.05053772319681936
get_ui_image_min: 0.04563540999438741
in-drivable-lane_max: 14.949999999999642
in-drivable-lane_mean: 9.93749999999995
in-drivable-lane_min: 6.350000000000007
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 3.4966548880233663, "get_ui_image": 0.046710782444354185, "step_physics": 0.18005050826318486, "survival_time": 9.650000000000002, "driven_lanedir": 1.383422593196992, "get_state_dump": 0.015872472340298684, "get_robot_state": 0.010372639931354326, "sim_render-ego0": 0.009742358296187885, "get_duckie_state": 2.7651639328789464e-06, "in-drivable-lane": 6.350000000000007, "deviation-heading": 1.106071325833763, "agent_compute-ego0": 0.08980026564647242, "complete-iteration": 0.39198621646645143, "set_robot_commands": 0.00860180928535068, "deviation-center-line": 0.2436186043537546, "driven_lanedir_consec": 1.383422593196992, "sim_compute_sim_state": 0.024518867128903103, "sim_compute_performance-ego0": 0.006099759917898276}, "LF-norm-zigzag-000-ego0": {"driven_any": 8.358746604832792, "get_ui_image": 0.06197452708466413, "step_physics": 0.22653006745255705, "survival_time": 21.850000000000176, "driven_lanedir": 4.946478213636219, "get_state_dump": 0.02140767715837313, "get_robot_state": 0.012504544432304766, "sim_render-ego0": 0.010152849432540266, "get_duckie_state": 2.753789022088595e-06, "in-drivable-lane": 8.85000000000008, "deviation-heading": 5.1519261486046375, "agent_compute-ego0": 0.09491199282206356, "complete-iteration": 0.4761873559864689, "set_robot_commands": 0.009377881816533058, "deviation-center-line": 0.9831246926762204, "driven_lanedir_consec": 4.946478213636219, "sim_compute_sim_state": 0.03230716703144927, "sim_compute_performance-ego0": 0.006804921311330578}, "LF-norm-techtrack-000-ego0": {"driven_any": 7.657467523827478, "get_ui_image": 0.05436466394928453, "step_physics": 0.2144593667489877, "survival_time": 19.25000000000014, "driven_lanedir": 3.5451498692072096, "get_state_dump": 0.01841926265874675, "get_robot_state": 0.010995905016370388, "sim_render-ego0": 0.009987805173804725, "get_duckie_state": 2.93946636773144e-06, "in-drivable-lane": 9.600000000000072, "deviation-heading": 4.027382628037145, "agent_compute-ego0": 0.09632753646435516, "complete-iteration": 0.4462278599566129, "set_robot_commands": 0.007095346796697904, "deviation-center-line": 0.8127151818325022, "driven_lanedir_consec": 3.5451498692072096, "sim_compute_sim_state": 0.02793753579490543, "sim_compute_performance-ego0": 0.006420901402290621}, "LF-norm-small_loop-000-ego0": {"driven_any": 24.956531448998955, "get_ui_image": 0.04563540999438741, "step_physics": 0.17881523977211372, "survival_time": 59.99999999999873, "driven_lanedir": 16.604217844348764, "get_state_dump": 0.018851909113367035, "get_robot_state": 0.012014059103299538, "sim_render-ego0": 0.01083697228507138, "get_duckie_state": 2.8580451984389637e-06, "in-drivable-lane": 14.949999999999642, "deviation-heading": 19.328873988776504, "agent_compute-ego0": 0.09368294144947265, "complete-iteration": 0.3936621985963541, "set_robot_commands": 0.008862987942342257, "deviation-center-line": 4.02331688046553, "driven_lanedir_consec": 16.604217844348764, "sim_compute_sim_state": 0.01744266830812783, "sim_compute_performance-ego0": 0.007302516703800198}}
set_robot_commands_max: 0.009377881816533058
set_robot_commands_mean: 0.008484506460230974
set_robot_commands_median: 0.008732398613846469
set_robot_commands_min: 0.007095346796697904
sim_compute_performance-ego0_max: 0.007302516703800198
sim_compute_performance-ego0_mean: 0.0066570248338299175
sim_compute_performance-ego0_median: 0.006612911356810599
sim_compute_performance-ego0_min: 0.006099759917898276
sim_compute_sim_state_max: 0.03230716703144927
sim_compute_sim_state_mean: 0.02555155956584641
sim_compute_sim_state_median: 0.026228201461904264
sim_compute_sim_state_min: 0.01744266830812783
sim_render-ego0_max: 0.01083697228507138
sim_render-ego0_mean: 0.010179996296901065
sim_render-ego0_median: 0.010070327303172494
sim_render-ego0_min: 0.009742358296187885
simulation-passed: 1
step_physics_max: 0.22653006745255705
step_physics_mean: 0.19996379555921084
step_physics_median: 0.19725493750608628
step_physics_min: 0.17881523977211372
survival_time_max: 59.99999999999873
survival_time_mean: 27.68749999999976
survival_time_min: 9.650000000000002
No reset possible
Job 62227 | submission 13720 | Frank (Chude) Qian 🇨🇦 | CBC Net v1 | aido-LFP_full-sim-validation | step 352 | success | up to date: no | evaluator reg02 | duration 0:14:30
survival_time_median: 6.024999999999986
in-drivable-lane_median: 3.849999999999987
driven_lanedir_consec_median: 0.8269966491210599
deviation-center-line_median: 0.1259313496227091


other stats
agent_compute-ego0_max: 0.12153128573769018
agent_compute-ego0_mean: 0.1051707122793856
agent_compute-ego0_median: 0.10072210468005838
agent_compute-ego0_min: 0.09770735401973546
complete-iteration_max: 0.646097051952532
complete-iteration_mean: 0.515534469512983
complete-iteration_median: 0.5153173330952139
complete-iteration_min: 0.385406159908972
deviation-center-line_max: 0.5383158061502973
deviation-center-line_mean: 0.20773386187425627
deviation-center-line_min: 0.040756942101309514
deviation-heading_max: 3.22775290105804
deviation-heading_mean: 1.273193909779749
deviation-heading_median: 0.7439285008988457
deviation-heading_min: 0.3771657362632638
driven_any_max: 4.922030254610177
driven_any_mean: 2.2706303440502564
driven_any_median: 1.8642067941486635
driven_any_min: 0.4320775332935221
driven_lanedir_consec_max: 2.9177023230435233
driven_lanedir_consec_mean: 1.249579910270833
driven_lanedir_consec_min: 0.42662401979768894
driven_lanedir_max: 2.9177023230435233
driven_lanedir_mean: 1.249579910270833
driven_lanedir_median: 0.8269966491210599
driven_lanedir_min: 0.42662401979768894
get_duckie_state_max: 0.0916655000887419
get_duckie_state_mean: 0.0662278954163305
get_duckie_state_median: 0.07947717807525168
get_duckie_state_min: 0.01429172542607673
get_robot_state_max: 0.01700198409045756
get_robot_state_mean: 0.014277460874113995
get_robot_state_median: 0.013923619160555996
get_robot_state_min: 0.01226062108488644
get_state_dump_max: 0.040923014343508826
get_state_dump_mean: 0.029725138143480084
get_state_dump_median: 0.02941308684946952
get_state_dump_min: 0.019151364531472464
get_ui_image_max: 0.08009786065290814
get_ui_image_mean: 0.061119005942003826
get_ui_image_median: 0.05849392126219191
get_ui_image_min: 0.04739032059072334
in-drivable-lane_max: 4.800000000000033
in-drivable-lane_mean: 3.1250000000000018
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-full-loop-000-ego0": {"driven_any": 0.4320775332935221, "get_ui_image": 0.05587937957362125, "step_physics": 0.14922241160744115, "survival_time": 1.850000000000001, "driven_lanedir": 0.42662401979768894, "get_state_dump": 0.02886087643472772, "get_robot_state": 0.015444354007118627, "sim_render-ego0": 0.01221323013305664, "get_duckie_state": 0.0916655000887419, "in-drivable-lane": 0.0, "deviation-heading": 0.3771657362632638, "agent_compute-ego0": 0.1018299993715788, "complete-iteration": 0.49560678005218506, "set_robot_commands": 0.01217940606568989, "deviation-center-line": 0.040756942101309514, "driven_lanedir_consec": 0.42662401979768894, "sim_compute_sim_state": 0.02240242456134997, "sim_compute_performance-ego0": 0.005642025094283254}, "LFP-full-zigzag-000-ego0": {"driven_any": 4.922030254610177, "get_ui_image": 0.08009786065290814, "step_physics": 0.2251736760622094, "survival_time": 12.30000000000004, "driven_lanedir": 2.9177023230435233, "get_state_dump": 0.040923014343508826, "get_robot_state": 0.01700198409045756, "sim_render-ego0": 0.013810944460664202, "get_duckie_state": 0.08329953741930757, "in-drivable-lane": 4.800000000000033, "deviation-heading": 3.22775290105804, "agent_compute-ego0": 0.12153128573769018, "complete-iteration": 0.646097051952532, "set_robot_commands": 0.009264271268960437, "deviation-center-line": 0.5383158061502973, "driven_lanedir_consec": 2.9177023230435233, "sim_compute_sim_state": 0.04142301671418101, "sim_compute_performance-ego0": 0.013315968185301253}, "LFP-full-techtrack-000-ego0": {"driven_any": 2.097289998344914, "get_ui_image": 0.06110846295076258, "step_physics": 0.1968984603881836, "survival_time": 6.749999999999984, "driven_lanedir": 1.0606168404099838, "get_state_dump": 0.02996529726421132, "get_robot_state": 0.01226062108488644, "sim_render-ego0": 0.01267184755381416, "get_duckie_state": 0.07565481873119578, "in-drivable-lane": 4.149999999999985, "deviation-heading": 0.8779243065232901, "agent_compute-ego0": 0.09961420998853796, "complete-iteration": 0.5350278861382428, "set_robot_commands": 0.008137748521917006, "deviation-center-line": 0.16066097541082722, "driven_lanedir_consec": 1.0606168404099838, "sim_compute_sim_state": 0.03127038829466876, "sim_compute_performance-ego0": 0.007195432396496043}, "LFP-full-small_loop-000-ego0": {"driven_any": 1.6311235899524137, "get_ui_image": 0.04739032059072334, "step_physics": 0.1485051507147673, "survival_time": 5.299999999999989, "driven_lanedir": 0.5933764578321359, "get_state_dump": 0.019151364531472464, "get_robot_state": 0.012402884313993364, "sim_render-ego0": 0.010959342261341131, "get_duckie_state": 0.01429172542607673, "in-drivable-lane": 3.5499999999999883, "deviation-heading": 0.6099326952744012, "agent_compute-ego0": 0.09770735401973546, "complete-iteration": 0.385406159908972, "set_robot_commands": 0.00907792332016419, "deviation-center-line": 0.09120172383459098, "driven_lanedir_consec": 0.5933764578321359, "sim_compute_sim_state": 0.01927279534740983, "sim_compute_performance-ego0": 0.00641079929387458}}
set_robot_commands_max: 0.01217940606568989
set_robot_commands_mean: 0.009664837294182878
set_robot_commands_median: 0.009171097294562312
set_robot_commands_min: 0.008137748521917006
sim_compute_performance-ego0_max: 0.013315968185301253
sim_compute_performance-ego0_mean: 0.008141056242488781
sim_compute_performance-ego0_median: 0.006803115845185312
sim_compute_performance-ego0_min: 0.005642025094283254
sim_compute_sim_state_max: 0.04142301671418101
sim_compute_sim_state_mean: 0.028592156229402393
sim_compute_sim_state_median: 0.026836406428009364
sim_compute_sim_state_min: 0.01927279534740983
sim_render-ego0_max: 0.013810944460664202
sim_render-ego0_mean: 0.012413841102219034
sim_render-ego0_median: 0.0124425388434354
sim_render-ego0_min: 0.010959342261341131
simulation-passed: 1
step_physics_max: 0.2251736760622094
step_physics_mean: 0.17994992469315035
step_physics_median: 0.17306043599781237
step_physics_min: 0.1485051507147673
survival_time_max: 12.30000000000004
survival_time_mean: 6.550000000000004
survival_time_min: 1.850000000000001
No reset possible
Job 62211 | submission 13703 | Frank (Chude) Qian 🇨🇦 | BC Net V2 | aido-LF-sim-validation | step 347 | host-error | up to date: no | evaluator reg02 | duration 0:00:44
Error while running Docker Compose:

Could not run command
│    cmd: [docker-compose, -p, reg02_c3f15323bf18-job62211-973871, up, -d]
│ stdout: ''
│  sderr: ''
│      e: Command '['docker-compose', '-p', 'reg02_c3f15323bf18-job62211-973871', 'up', '-d']' returned non-zero exit status 1.
No reset possible
Job 62201 | submission 13702 | Frank (Chude) Qian 🇨🇦 | BC Net V2 | aido-LF-sim-validation | step 347 | host-error | up to date: no | evaluator reg02 | duration 0:00:41
Error while running Docker Compose:

Could not run command
│    cmd: [docker-compose, -p, reg02_c3f15323bf18-job62201-595204, up, -d]
│ stdout: ''
│  sderr: ''
│      e: Command '['docker-compose', '-p', 'reg02_c3f15323bf18-job62201-595204', 'up', '-d']' returned non-zero exit status 1.
No reset possible
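For host errors like the two above, the report itself is of little help: docker-compose exited with status 1, but both stdout and stderr were captured empty. The quickest diagnosis is usually to re-run the exact command by hand and keep its output; a minimal sketch, assuming docker-compose is on PATH and the command is run from the job's working directory where the generated compose file lives:

import subprocess

# Re-run the exact command from the report and keep whatever it prints.
cmd = ["docker-compose", "-p", "reg02_c3f15323bf18-job62201-595204", "up", "-d"]
result = subprocess.run(cmd, capture_output=True, text=True)
print("exit status:", result.returncode)
print("stdout:", result.stdout)
print("stderr:", result.stderr)  # typically names the missing image, bad tag, or port clash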
Job 62189 | submission 13700 | Frank (Chude) Qian 🇨🇦 | BC Net V2 | aido-LF-sim-validation | step 347 | host-error | up to date: no | evaluator reg02 | duration 0:01:01
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 45, in init
              ||     limit_gpu_memory()
              ||   File "solution.py", line 29, in limit_gpu_memory
              ||     logical_gpus = tf.config.experimental.list_logical_devices('GPU')
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/config.py", line 439, in list_logical_devices
              ||     return context.context().list_logical_devices(device_type=device_type)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 1368, in list_logical_devices
              ||     self.ensure_initialized()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 526, in ensure_initialized
              ||     context_handle = pywrap_tfe.TFE_NewContext(opts)
              || tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 45, in init
              || |     limit_gpu_memory()
              || |   File "solution.py", line 29, in limit_gpu_memory
              || |     logical_gpus = tf.config.experimental.list_logical_devices('GPU')
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/config.py", line 439, in list_logical_devices
              || |     return context.context().list_logical_devices(device_type=device_type)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 1368, in list_logical_devices
              || |     self.ensure_initialized()
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 526, in ensure_initialized
              || |     context_handle = pywrap_tfe.TFE_NewContext(opts)
              || | tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 45, in init
              ||     limit_gpu_memory()
              ||   File "solution.py", line 29, in limit_gpu_memory
              ||     logical_gpus = tf.config.experimental.list_logical_devices('GPU')
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/config.py", line 439, in list_logical_devices
              ||     return context.context().list_logical_devices(device_type=device_type)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 1368, in list_logical_devices
              ||     self.ensure_initialized()
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 526, in ensure_initialized
              ||     context_handle = pywrap_tfe.TFE_NewContext(opts)
              || tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 45, in init
              || |     limit_gpu_memory()
              || |   File "solution.py", line 29, in limit_gpu_memory
              || |     logical_gpus = tf.config.experimental.list_logical_devices('GPU')
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/config.py", line 439, in list_logical_devices
              || |     return context.context().list_logical_devices(device_type=device_type)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 1368, in list_logical_devices
              || |     self.ensure_initialized()
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/context.py", line 526, in ensure_initialized
              || |     context_handle = pywrap_tfe.TFE_NewContext(opts)
              || | tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
              || |
              ||

No reset possible
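This host error is an environment mismatch rather than an out-of-memory condition (despite the "Detected out of CUDA memory" classification in the log): the image's TensorFlow was built against a newer CUDA runtime than the driver on this machine, so the very first GPU call in the submission's init(), tf.config.experimental.list_logical_devices('GPU'), brings the ego0 node down. Because a failed CUDA context cannot be repaired within the same process, one defensive pattern is to probe TensorFlow's GPU path in a child process and hide the GPUs before the real import. A hedged sketch of that idea, not the submitter's actual solution.py:

import os
import subprocess
import sys

def tf_gpu_usable(timeout_s=120):
    """Probe TF's GPU init in a child process; if it crashes, the crash stays there."""
    probe = ("import tensorflow as tf; "
             "tf.config.experimental.list_logical_devices('GPU')")
    try:
        result = subprocess.run([sys.executable, "-c", probe],
                                capture_output=True, timeout=timeout_s)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

if not tf_gpu_usable():
    # Fall back to CPU instead of dying with
    # "CUDA driver version is insufficient for CUDA runtime version".
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf  # safe to import now that the device choice is fixed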
Job 62175 | submission 13693 | Samuel Alexander | template-ros | aido-LF-sim-validation | step 347 | success | up to date: no | evaluator reg02 | duration 0:13:02
driven_lanedir_consec_median: 0.6170081810984177
survival_time_median: 12.900000000000048
deviation-center-line_median: 0.19867033894208289
in-drivable-lane_median: 10.850000000000044


other stats
agent_compute-ego0_max: 0.04748881500934868
agent_compute-ego0_mean: 0.03934346562952996
agent_compute-ego0_median: 0.037730363037028286
agent_compute-ego0_min: 0.03442432143471458
complete-iteration_max: 0.4058677708661115
complete-iteration_mean: 0.3495487716705381
complete-iteration_median: 0.3401206875974625
complete-iteration_min: 0.31208594062111594
deviation-center-line_max: 0.33444273289483856
deviation-center-line_mean: 0.20853035767774153
deviation-center-line_min: 0.1023380199319619
deviation-heading_max: 2.206274583283399
deviation-heading_mean: 1.108270987468961
deviation-heading_median: 0.9019194175941796
deviation-heading_min: 0.4229705314040847
driven_any_max: 4.484792123383148
driven_any_mean: 3.30912612601694
driven_any_median: 3.5056399493754555
driven_any_min: 1.7404324819336991
driven_lanedir_consec_max: 0.9153504279687236
driven_lanedir_consec_mean: 0.6158449558212293
driven_lanedir_consec_min: 0.31401303311935846
driven_lanedir_max: 0.9153504279687236
driven_lanedir_mean: 0.6158449558212293
driven_lanedir_median: 0.6170081810984177
driven_lanedir_min: 0.31401303311935846
get_duckie_state_max: 4.776174371892755e-06
get_duckie_state_mean: 4.401781483075568e-06
get_duckie_state_median: 4.4188428235311786e-06
get_duckie_state_min: 3.993265913347158e-06
get_robot_state_max: 0.018130810663042736
get_robot_state_mean: 0.014526047093304932
get_robot_state_median: 0.013501632943510372
get_robot_state_min: 0.012970111823156244
get_state_dump_max: 0.03690777394016093
get_state_dump_mean: 0.027004495214641253
get_state_dump_median: 0.02606705143782864
get_state_dump_min: 0.018976104042746803
get_ui_image_max: 0.06699116337937092
get_ui_image_mean: 0.05934539704714793
get_ui_image_median: 0.060135388414742245
get_ui_image_min: 0.05011964797973633
in-drivable-lane_max: 11.900000000000077
in-drivable-lane_mean: 9.425000000000036
in-drivable-lane_min: 4.099999999999985
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 3.749454658576709, "get_ui_image": 0.05011964797973633, "step_physics": 0.1377577816356312, "survival_time": 13.70000000000006, "driven_lanedir": 0.31401303311935846, "get_state_dump": 0.018976104042746803, "get_robot_state": 0.013989491029219196, "sim_render-ego0": 0.012703969261863016, "get_duckie_state": 4.776174371892755e-06, "in-drivable-lane": 11.75000000000005, "deviation-heading": 1.2716347528774994, "agent_compute-ego0": 0.03442432143471458, "complete-iteration": 0.31208594062111594, "set_robot_commands": 0.008991431322964755, "deviation-center-line": 0.20842715942172263, "driven_lanedir_consec": 0.31401303311935846, "sim_compute_sim_state": 0.027294653979214756, "sim_compute_performance-ego0": 0.007600339542735706}, "LF-norm-zigzag-000-ego0": {"driven_any": 4.484792123383148, "get_ui_image": 0.06570732110757323, "step_physics": 0.16585874557495117, "survival_time": 16.000000000000092, "driven_lanedir": 0.9153504279687236, "get_state_dump": 0.021219622680331316, "get_robot_state": 0.012970111823156244, "sim_render-ego0": 0.01106545486925547, "get_duckie_state": 4.493558889608888e-06, "in-drivable-lane": 11.900000000000077, "deviation-heading": 2.206274583283399, "agent_compute-ego0": 0.03724333578923781, "complete-iteration": 0.35799420659787184, "set_robot_commands": 0.008112259000261253, "deviation-center-line": 0.33444273289483856, "driven_lanedir_consec": 0.9153504279687236, "sim_compute_sim_state": 0.02873935283530167, "sim_compute_performance-ego0": 0.006852416605964256}, "LF-norm-techtrack-000-ego0": {"driven_any": 3.2618252401742023, "get_ui_image": 0.06699116337937092, "step_physics": 0.17123718909275384, "survival_time": 12.100000000000035, "driven_lanedir": 0.5575978328545987, "get_state_dump": 0.03690777394016093, "get_robot_state": 0.018130810663042736, "sim_render-ego0": 0.01524354989636582, "get_duckie_state": 3.993265913347158e-06, "in-drivable-lane": 9.950000000000037, "deviation-heading": 0.4229705314040847, "agent_compute-ego0": 0.04748881500934868, "complete-iteration": 0.4058677708661115, "set_robot_commands": 0.011419612193794408, "deviation-center-line": 0.1023380199319619, "driven_lanedir_consec": 0.5575978328545987, "sim_compute_sim_state": 0.029149501902576334, "sim_compute_performance-ego0": 0.00910089732197577}, "LF-norm-small_loop-000-ego0": {"driven_any": 1.7404324819336991, "get_ui_image": 0.054563455721911264, "step_physics": 0.1429958133136525, "survival_time": 6.749999999999984, "driven_lanedir": 0.6764185293422367, "get_state_dump": 0.030914480195325965, "get_robot_state": 0.013013774857801547, "sim_render-ego0": 0.011384387226665722, "get_duckie_state": 4.344126757453469e-06, "in-drivable-lane": 4.099999999999985, "deviation-heading": 0.5322040823108597, "agent_compute-ego0": 0.03821739028481876, "complete-iteration": 0.3222471685970531, "set_robot_commands": 0.008285012315301335, "deviation-center-line": 0.18891351846244311, "driven_lanedir_consec": 0.6764185293422367, "sim_compute_sim_state": 0.014489978551864624, "sim_compute_performance-ego0": 0.008165787248050465}}
set_robot_commands_max: 0.011419612193794408
set_robot_commands_mean: 0.009202078708080435
set_robot_commands_median: 0.008638221819133044
set_robot_commands_min: 0.008112259000261253
sim_compute_performance-ego0_max: 0.00910089732197577
sim_compute_performance-ego0_mean: 0.00792986017968155
sim_compute_performance-ego0_median: 0.007883063395393086
sim_compute_performance-ego0_min: 0.006852416605964256
sim_compute_sim_state_max: 0.029149501902576334
sim_compute_sim_state_mean: 0.024918371817239347
sim_compute_sim_state_median: 0.028017003407258212
sim_compute_sim_state_min: 0.014489978551864624
sim_render-ego0_max: 0.01524354989636582
sim_render-ego0_mean: 0.012599340313537506
sim_render-ego0_median: 0.012044178244264368
sim_render-ego0_min: 0.01106545486925547
simulation-passed: 1
step_physics_max: 0.17123718909275384
step_physics_mean: 0.15446238240424717
step_physics_median: 0.15442727944430185
step_physics_min: 0.1377577816356312
survival_time_max: 16.000000000000092
survival_time_mean: 12.137500000000044
survival_time_min: 6.749999999999984
No reset possible
Job 62153 | submission 13520 | András Kalapos 🇭🇺 | real-v1.0-3092-363 | aido-LF-sim-validation | step 347 | success | up to date: no | evaluator reg02 | duration 1:00:27
driven_lanedir_consec_median: 28.748751507312075
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.5724207885512937
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.03913137934587083
agent_compute-ego0_mean: 0.03757990275096337
agent_compute-ego0_median: 0.0377152597378136
agent_compute-ego0_min: 0.03575771218235546
complete-iteration_max: 0.4954770663497251
complete-iteration_mean: 0.42139782317969127
complete-iteration_median: 0.4151885262337652
complete-iteration_min: 0.3597371739015095
deviation-center-line_max: 2.7523743463120263
deviation-center-line_mean: 2.587040772902735
deviation-center-line_min: 2.450947168196327
deviation-heading_max: 8.420386860169318
deviation-heading_mean: 7.816185866648665
deviation-heading_median: 8.037323070462028
deviation-heading_min: 6.769710465501283
driven_any_max: 31.329441138588056
driven_any_mean: 29.58887695113551
driven_any_median: 29.055947730000703
driven_any_min: 28.91417120595259
driven_lanedir_consec_max: 31.05287959229932
driven_lanedir_consec_mean: 29.282881041952407
driven_lanedir_consec_min: 28.581141560886156
driven_lanedir_max: 31.05287959229932
driven_lanedir_mean: 29.282881041952407
driven_lanedir_median: 28.748751507312075
driven_lanedir_min: 28.581141560886156
get_duckie_state_max: 3.0434598136603286e-06
get_duckie_state_mean: 2.9487177096834586e-06
get_duckie_state_median: 2.9533332233921276e-06
get_duckie_state_min: 2.8447445782892514e-06
get_robot_state_max: 0.014487023357546994
get_robot_state_mean: 0.013358195258814726
get_robot_state_median: 0.01311044629467814
get_robot_state_min: 0.01272486508835563
get_state_dump_max: 0.024856955681514185
get_state_dump_mean: 0.021683288900977267
get_state_dump_median: 0.022086802966191706
get_state_dump_min: 0.017702593990011478
get_ui_image_max: 0.06728830563833474
get_ui_image_mean: 0.05721263151978771
get_ui_image_median: 0.05637995418561289
get_ui_image_min: 0.04880231206959034
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 31.329441138588056, "get_ui_image": 0.05518793920791715, "step_physics": 0.1999310492278138, "survival_time": 59.99999999999873, "driven_lanedir": 31.05287959229932, "get_state_dump": 0.024856955681514185, "get_robot_state": 0.014487023357546994, "sim_render-ego0": 0.012931759411051909, "get_duckie_state": 2.871147301870024e-06, "in-drivable-lane": 0.0, "deviation-heading": 6.769710465501283, "agent_compute-ego0": 0.03913137934587083, "complete-iteration": 0.39090350009718106, "set_robot_commands": 0.009187968942545336, "deviation-center-line": 2.4622569895758932, "driven_lanedir_consec": 31.05287959229932, "sim_compute_sim_state": 0.026123258096788646, "sim_compute_performance-ego0": 0.008850842490978384}, "LF-norm-zigzag-000-ego0": {"driven_any": 28.91417120595259, "get_ui_image": 0.06728830563833474, "step_physics": 0.2883200500529573, "survival_time": 59.99999999999873, "driven_lanedir": 28.581141560886156, "get_state_dump": 0.024400286233792395, "get_robot_state": 0.01341988105361011, "sim_render-ego0": 0.011095571478240992, "get_duckie_state": 2.8447445782892514e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.073324683850567, "agent_compute-ego0": 0.03756093303924993, "complete-iteration": 0.4954770663497251, "set_robot_commands": 0.009727548103745437, "deviation-center-line": 2.6825845875266947, "driven_lanedir_consec": 28.581141560886156, "sim_compute_sim_state": 0.03615273246161646, "sim_compute_performance-ego0": 0.007297154965746115}, "LF-norm-techtrack-000-ego0": {"driven_any": 29.13106880151358, "get_ui_image": 0.05757196916330863, "step_physics": 0.25829340754500235, "survival_time": 59.99999999999873, "driven_lanedir": 28.791485027955627, "get_state_dump": 0.017702593990011478, "get_robot_state": 0.012801011535746172, "sim_render-ego0": 0.010605248086756214, "get_duckie_state": 3.035519144914231e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.420386860169318, "agent_compute-ego0": 0.03575771218235546, "complete-iteration": 0.43947355237034935, "set_robot_commands": 0.008181052045163069, "deviation-center-line": 2.7523743463120263, "driven_lanedir_consec": 28.791485027955627, "sim_compute_sim_state": 0.03146972684042341, "sim_compute_performance-ego0": 0.006863809842848957}, "LF-norm-small_loop-000-ego0": {"driven_any": 28.98082665848783, "get_ui_image": 0.04880231206959034, "step_physics": 0.19538009275901724, "survival_time": 59.99999999999873, "driven_lanedir": 28.706017986668524, "get_state_dump": 0.01977331969859102, "get_robot_state": 0.01272486508835563, "sim_render-ego0": 0.011258089175132989, "get_duckie_state": 3.0434598136603286e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.00132145707349, "agent_compute-ego0": 0.03786958643637728, "complete-iteration": 0.3597371739015095, "set_robot_commands": 0.008976066837104333, "deviation-center-line": 2.450947168196327, "driven_lanedir_consec": 28.706017986668524, "sim_compute_sim_state": 0.017841232904883646, "sim_compute_performance-ego0": 0.006894218435295417}}
set_robot_commands_max: 0.009727548103745437
set_robot_commands_mean: 0.009018158982139543
set_robot_commands_median: 0.009082017889824837
set_robot_commands_min: 0.008181052045163069
sim_compute_performance-ego0_max: 0.008850842490978384
sim_compute_performance-ego0_mean: 0.007476506433717218
sim_compute_performance-ego0_median: 0.007095686700520766
sim_compute_performance-ego0_min: 0.006863809842848957
sim_compute_sim_state_max: 0.03615273246161646
sim_compute_sim_state_mean: 0.02789673757592804
sim_compute_sim_state_median: 0.02879649246860603
sim_compute_sim_state_min: 0.017841232904883646
sim_render-ego0_max: 0.012931759411051909
sim_render-ego0_mean: 0.011472667037795526
sim_render-ego0_median: 0.011176830326686989
sim_render-ego0_min: 0.010605248086756214
simulation-passed: 1
step_physics_max: 0.2883200500529573
step_physics_mean: 0.23548114989619767
step_physics_median: 0.22911222838640807
step_physics_min: 0.19538009275901724
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
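
For reference, every summary row above (min / mean / median / max) is a plain aggregate of the four per-episode values in the "details" mapping. Below is a minimal sketch of that relationship in Python, assuming `details` is the per-episodes JSON above parsed into a dict; the helper name `aggregate` is illustrative, not part of the platform:

import statistics

def aggregate(details: dict, metric: str) -> dict:
    # Collect the metric across all episodes (here: the four LF-norm-* runs).
    values = [episode[metric] for episode in details.values()]
    return {
        f"{metric}_max": max(values),
        f"{metric}_mean": statistics.mean(values),
        f"{metric}_median": statistics.median(values),
        f"{metric}_min": min(values),
    }

# Example: aggregate(details, "step_physics") reproduces the
# step_physics_max/mean/median/min rows listed above.
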
Job ID: 62151 · submission: 13533 · user: András Kalapos 🇭🇺 · user label: real-v1.0-3092-363 · challenge: aido-LFV_multi-sim-validation · step: 356 · status: failed · up to date: no · evaluator: reg02 · duration: 0:02:01 · message: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
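
Job 62151 aborts because TensorFlow raises "Failed to get convolution algorithm ... cuDNN failed to initialize" while the agent's init() builds its PPOTrainer (job 62150 below fails identically). In practice this error often means cuDNN could not allocate GPU memory at session creation, which is plausible in a multi-agent run where several ego nodes share one GPU. The following is a hedged sketch of a common TF 2.x mitigation, run before the model is constructed; the out-of-memory cause is an assumption, not a confirmed diagnosis for this submission:

import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of reserving
# (almost) all of it up front, which is the default behavior and can
# starve other processes sharing the same GPU.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
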
Job ID: 62150 · submission: 13533 · user: András Kalapos 🇭🇺 · user label: real-v1.0-3092-363 · challenge: aido-LFV_multi-sim-validation · step: 356 · status: failed · up to date: no · evaluator: reg02 · duration: 0:01:51 · message: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
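
Note: the root failure in the record above is TensorFlow's "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize", raised the moment the PPO policy builds its first convolution. On a shared evaluator GPU this usually means the process could not allocate GPU memory at session creation. A minimal sketch of the usual workaround, assuming the submission can edit solution.py before RLlibModel constructs the PPOTrainer (the placement is illustrative, not part of the original submission):

    import tensorflow as tf

    # Hypothetical addition near the top of solution.py's init(), before
    # the PPOTrainer is built. Enables on-demand GPU memory allocation so
    # cuDNN can initialize even if other processes on the shared evaluator
    # GPU already hold most of its memory. Must run before TensorFlow
    # first touches the GPU.
    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

Setting the environment variable TF_FORCE_GPU_ALLOW_GROWTH=true in the submission container has the same effect without code changes.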
Job 62149 | submission 13533 | user: András Kalapos 🇭🇺 | user label: real-v1.0-3092-363 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: no | evaluator: reg02 | duration: 0:01:50
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
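
Note: the same cuDNN initialization error recurs across these jobs, so a quick self-check inside the agent container can confirm whether GPU convolutions run at all before the trainer is built. A minimal sketch, assuming TensorFlow 2.x as in the tracebacks above:

    import numpy as np
    import tensorflow as tf

    # Tiny convolution forward pass. If cuDNN cannot initialize (e.g. GPU
    # memory is exhausted), this raises the same "Failed to get convolution
    # algorithm" error seen in the tracebacks above.
    x = tf.constant(np.random.rand(1, 32, 32, 3), dtype=tf.float32)
    y = tf.keras.layers.Conv2D(filters=4, kernel_size=3, activation="relu")(x)
    print("cuDNN convolution ok, output shape:", y.shape)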
Job 62148 | submission 13533 | user: András Kalapos 🇭🇺 | user label: real-v1.0-3092-363 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: no | evaluator: reg02 | duration: 0:01:53
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego2" aborted with the following error:

error in ego2 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
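
Note on the failure above (the two failed jobs for submission 13544 further down abort identically): it is TensorFlow's generic "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize" error, raised while PPOTrainer builds its policy inside the agent's init(). In practice this usually means cuDNN could not obtain GPU memory when the first convolution ran, typically because TensorFlow pre-allocated the whole device or another process already held it. A minimal workaround sketch, assuming the TensorFlow 2.x setup visible in the tracebacks; the placement is illustrative, not the submitter's actual code:

    import tensorflow as tf

    # Ask TensorFlow to grow GPU memory on demand instead of pre-allocating
    # the whole device. This must run before any session or model is
    # created, i.e. before RLlibModel/PPOTrainer are constructed in init().
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

    # Equivalent without code changes: export TF_FORCE_GPU_ALLOW_GROWTH=true
    # in the container environment before TensorFlow is imported.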
Job 62136 | submission 13543 | user András Kalapos 🇭🇺 | label 3090 | challenge aido-LFV-sim-validation | step 354 | status: success | up to date: no | evaluator reg02 | duration 0:34:21
survival_time_median: 9.025000000000002
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 3.765417408969266
deviation-center-line_median: 0.36259646839214565


other stats
agent_compute-ego0_max: 0.0432952344417572
agent_compute-ego0_mean: 0.03902076261230731
agent_compute-ego0_median: 0.03777982799517668
agent_compute-ego0_min: 0.03722816001711868
agent_compute-npc0_max: 0.0936167190472285
agent_compute-npc0_mean: 0.07722077011203592
agent_compute-npc0_median: 0.07573362054556526
agent_compute-npc0_min: 0.06379912030978466
complete-iteration_max: 1.918228191970497
complete-iteration_mean: 1.2441419248635497
complete-iteration_median: 1.2319440043430714
complete-iteration_min: 0.5944514987975593
deviation-center-line_max: 0.8497875756869387
deviation-center-line_mean: 0.4421434052738346
deviation-center-line_min: 0.19359310862410825
deviation-heading_max: 3.5353859709105926
deviation-heading_mean: 1.8054298316024509
deviation-heading_median: 1.4519620883572426
deviation-heading_min: 0.7824091787847257
driven_any_max: 10.419551457414084
driven_any_mean: 5.1503998131731725
driven_any_median: 3.845621726253982
driven_any_min: 2.4908043427706414
driven_lanedir_consec_max: 10.242822595198248
driven_lanedir_consec_mean: 5.059628727761386
driven_lanedir_consec_min: 2.464857497908768
driven_lanedir_max: 10.242822595198248
driven_lanedir_mean: 5.059628727761386
driven_lanedir_median: 3.765417408969266
driven_lanedir_min: 2.464857497908768
get_duckie_state_max: 4.154112603929308e-06
get_duckie_state_mean: 3.7719633707077625e-06
get_duckie_state_median: 3.869278228305756e-06
get_duckie_state_min: 3.195184422290231e-06
get_robot_state_max: 0.06537755894404586
get_robot_state_mean: 0.04886912699610234
get_robot_state_median: 0.05299806583508328
get_robot_state_min: 0.024102817370196965
get_state_dump_max: 0.04116136166784498
get_state_dump_mean: 0.03221720039862498
get_state_dump_median: 0.03322780376015153
get_state_dump_min: 0.021251832406351884
get_ui_image_max: 0.09430213487276468
get_ui_image_mean: 0.07914698272201437
get_ui_image_median: 0.08347025708702238
get_ui_image_min: 0.05534528184124804
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes details:
{"LFV-norm-loop-000-ego0": {"driven_any": 4.657202210694511, "get_ui_image": 0.07525447732237377, "step_physics": 0.45129919814192543, "survival_time": 10.90000000000002, "driven_lanedir": 4.52550601628898, "get_state_dump": 0.03082211376869515, "get_robot_state": 0.05044334773059305, "sim_render-ego0": 0.010409611000862296, "sim_render-npc0": 0.011362912992364195, "sim_render-npc1": 0.010108332655745554, "sim_render-npc2": 0.011254615435317228, "get_duckie_state": 3.7961898873385777e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.0911957738022418, "agent_compute-ego0": 0.03785454628130072, "agent_compute-npc0": 0.07526403583892405, "agent_compute-npc1": 0.07237573628011904, "agent_compute-npc2": 0.09096466787329548, "complete-iteration": 1.0389756470510405, "set_robot_commands": 0.008169352736102935, "deviation-center-line": 0.4606236571753992, "driven_lanedir_consec": 4.52550601628898, "sim_compute_sim_state": 0.05072907334593333, "sim_compute_performance-ego0": 0.0069400785176177, "sim_compute_performance-npc0": 0.0067312902511527, "sim_compute_performance-npc1": 0.007877561055361953, "sim_compute_performance-npc2": 0.007360495388780011},
 "LFV-norm-zigzag-000-ego0": {"driven_any": 10.419551457414084, "get_ui_image": 0.09430213487276468, "step_physics": 1.127491444926108, "survival_time": 23.200000000000195, "driven_lanedir": 10.242822595198248, "get_state_dump": 0.03563349375160792, "get_robot_state": 0.06537755894404586, "sim_render-ego0": 0.01124402323076802, "sim_render-npc0": 0.011653690440680393, "sim_render-npc1": 0.013346087035312449, "sim_render-npc2": 0.011884093540970996, "sim_render-npc3": 0.011757417904433383, "get_duckie_state": 3.942366569272933e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.5353859709105926, "agent_compute-ego0": 0.03770510970905263, "agent_compute-npc0": 0.07620320525220646, "agent_compute-npc1": 0.07688631652503886, "agent_compute-npc2": 0.08179335286540369, "agent_compute-npc3": 0.0742281247210759, "complete-iteration": 1.918228191970497, "set_robot_commands": 0.008699169466572423, "deviation-center-line": 0.8497875756869387, "driven_lanedir_consec": 10.242822595198248, "sim_compute_sim_state": 0.10797091863488638, "sim_compute_performance-ego0": 0.007210238261889386, "sim_compute_performance-npc0": 0.006600552220498362, "sim_compute_performance-npc1": 0.007159324358868343, "sim_compute_performance-npc2": 0.006549871608775149, "sim_compute_performance-npc3": 0.006828616255073137},
 "LFV-norm-techtrack-000-ego0": {"driven_any": 3.034041241813453, "get_ui_image": 0.091686036851671, "step_physics": 0.5993329998519685, "survival_time": 7.149999999999983, "driven_lanedir": 3.0053288016495516, "get_state_dump": 0.04116136166784498, "get_robot_state": 0.0555527839395735, "sim_render-ego0": 0.011618543002340527, "sim_render-npc0": 0.01076247129175398, "sim_render-npc1": 0.012304662002457513, "sim_render-npc2": 0.010436856084399752, "sim_render-npc3": 0.012486489282713996, "get_duckie_state": 4.154112603929308e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.7824091787847257, "agent_compute-ego0": 0.0432952344417572, "agent_compute-npc0": 0.0936167190472285, "agent_compute-npc1": 0.08734687666098277, "agent_compute-npc2": 0.08740336034033033, "agent_compute-npc3": 0.09314467675156064, "complete-iteration": 1.4249123616351025, "set_robot_commands": 0.009884716735945808, "deviation-center-line": 0.26456927960889215, "driven_lanedir_consec": 3.0053288016495516, "sim_compute_sim_state": 0.1014891349607044, "sim_compute_performance-ego0": 0.006659484571880764, "sim_compute_performance-npc0": 0.006586997045411004, "sim_compute_performance-npc1": 0.007951706647872925, "sim_compute_performance-npc2": 0.005752301878399319, "sim_compute_performance-npc3": 0.0058842963642544216},
 "LFV-norm-small_loop-000-ego0": {"driven_any": 2.4908043427706414, "get_ui_image": 0.05534528184124804, "step_physics": 0.3144517470532515, "survival_time": 6.299999999999986, "driven_lanedir": 2.464857497908768, "get_state_dump": 0.021251832406351884, "get_robot_state": 0.024102817370196965, "sim_render-ego0": 0.01165673676438219, "sim_render-npc0": 0.010453368735125684, "get_duckie_state": 3.195184422290231e-06, "in-drivable-lane": 0.0, "deviation-heading": 0.8127284029122434, "agent_compute-ego0": 0.03722816001711868, "agent_compute-npc0": 0.06379912030978466, "complete-iteration": 0.5944514987975593, "set_robot_commands": 0.009400422178854154, "deviation-center-line": 0.19359310862410825, "driven_lanedir_consec": 2.464857497908768, "sim_compute_sim_state": 0.02543041086572362, "sim_compute_performance-ego0": 0.006614885930939922, "sim_compute_performance-npc0": 0.006523962095966489}}
set_robot_commands_max: 0.009884716735945808
set_robot_commands_mean: 0.00903841527936883
set_robot_commands_median: 0.00904979582271329
set_robot_commands_min: 0.008169352736102935
sim_compute_performance-ego0_max: 0.007210238261889386
sim_compute_performance-ego0_mean: 0.006856171820581944
sim_compute_performance-ego0_median: 0.006799781544749232
sim_compute_performance-ego0_min: 0.006614885930939922
sim_compute_performance-npc0_max: 0.0067312902511527
sim_compute_performance-npc0_mean: 0.006610700403257139
sim_compute_performance-npc0_median: 0.006593774632954683
sim_compute_performance-npc0_min: 0.006523962095966489
sim_compute_sim_state_max: 0.10797091863488638
sim_compute_sim_state_mean: 0.07140488445181194
sim_compute_sim_state_median: 0.07610910415331885
sim_compute_sim_state_min: 0.02543041086572362
sim_render-ego0_max: 0.01165673676438219
sim_render-ego0_mean: 0.011232228499588258
sim_render-ego0_median: 0.011431283116554274
sim_render-ego0_min: 0.010409611000862296
sim_render-npc0_max: 0.011653690440680393
sim_render-npc0_mean: 0.011058110864981062
sim_render-npc0_median: 0.01106269214205909
sim_render-npc0_min: 0.010453368735125684
simulation-passed: 1
step_physics_max: 1.127491444926108
step_physics_mean: 0.6231438474933133
step_physics_median: 0.525316098996947
step_physics_min: 0.3144517470532515
survival_time_max: 23.200000000000195
survival_time_mean: 11.887500000000044
survival_time_min: 6.299999999999986
No reset possible
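
A note on reading these scores: each *_max/_mean/_median/_min entry aggregates the four episodes in the per-episodes details above. A quick sanity check in Python, with the survival_time values copied by hand from the details blob (a sketch, not part of the evaluator output):

    from statistics import mean, median

    # Per-episode survival_time values from the details blob above.
    survival_times = [10.90000000000002, 23.200000000000195,
                      7.149999999999983, 6.299999999999986]

    # With an even number of episodes, the median is the average of the
    # two middle values: (7.149999999999983 + 10.90000000000002) / 2.
    print(median(survival_times))  # 9.025000000000002, i.e. survival_time_median
    print(mean(survival_times))    # ~11.8875, i.e. survival_time_mean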
Job 62135 | submission 13544 | user András Kalapos 🇭🇺 | label 3090 | challenge aido-LFVI-sim-testing | step 360 | status: failed | up to date: no | evaluator reg02 | duration 0:02:56
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
Job 62134 | submission 13544 | user András Kalapos 🇭🇺 | label 3090 | challenge aido-LFVI-sim-testing | step 360 | status: failed | up to date: no | evaluator reg02 | duration 0:02:07
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
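The failure above is the standard TensorFlow "Failed to get convolution algorithm" error: cuDNN could not initialize, which in a containerized evaluation most often means the process could not allocate GPU memory at startup, or the container's CUDA/cuDNN build does not match the host driver. A minimal mitigation sketch, assuming the agent container runs TensorFlow 2.x in v1-compat mode as the traceback suggests; this snippet is illustrative and not part of the submission:

    # Hypothetical mitigation (not from the submission): enable GPU memory
    # growth so TensorFlow does not reserve all GPU memory up front, a
    # frequent cause of "cuDNN failed to initialize".
    import tensorflow as tf

    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        # Must run before any op is placed on the GPU.
        tf.config.experimental.set_memory_growth(gpu, True)

For it to take effect, such a call would have to run before the PPOTrainer is constructed in /submission/model.py, since that is where the first GPU ops are created.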
Job 62133 | submission 13544 | András Kalapos 🇭🇺 | label: 3090 | aido-LFVI-sim-testing | step 360 | status: failed | up to date: no | evaluator: reg02 | duration: 0:02:07
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
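The same cuDNN initialization failure recurs for this submission. When the GPU cannot be made usable at all, a coarser workaround is to keep the policy on CPU. A sketch under the assumption of the Ray/RLlib version visible in the traceback; "rllib_config" below is a stand-in for the dict the submission builds in /submission/model.py, and the env is a placeholder:

    # Hypothetical workaround (not from the submission): hide the GPU and
    # zero out RLlib's GPU allocation so the policy is built on CPU.
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide the GPU from TensorFlow

    from ray.rllib.agents.ppo import PPOTrainer

    rllib_config = {
        "env": "CartPole-v0",       # placeholder env, not the Duckietown one
        "num_gpus": 0,              # keep the learner/policy on CPU
        "num_gpus_per_worker": 0,   # keep rollout workers on CPU
    }
    trainer = PPOTrainer(config=rllib_config)

CPU-only inference is slower but sidesteps the GPU convolution path entirely, which is why it is a common fallback when the evaluator's CUDA stack and the submission image disagree.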
Job 62132 | submission 13545 | András Kalapos 🇭🇺 | label: 3090 | aido-LFVI-sim-validation | step 359 | status: failed | up to date: no | evaluator: reg02 | duration: 0:02:22
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 126, in <module>
              ||     main()
              ||   File "solution.py", line 122, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 126, in <module>
              || |     main()
              || |   File "solution.py", line 122, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
No reset possible
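This job failed before any episode ran: the agent's init() builds a PPOTrainer, and TensorFlow aborts with "Failed to get convolution algorithm ... cuDNN failed to initialize", which the experiment manager surfaces as InvalidSubmission: Getting agent protocol. In practice this error usually means the GPU had no free memory when the session was created (TensorFlow pre-allocates the whole device by default) or the image's CUDA/cuDNN stack does not match the driver. Below is a minimal sketch of the common mitigation, enabling memory growth before any graph is built; placing it at the top of the submission's model.py is an assumption, as is memory pressure (rather than a version mismatch) being the root cause here.

    # Hedged sketch: disable TF's whole-device pre-allocation before the
    # PPOTrainer constructs its policy graph. Assumes the cuDNN failure is
    # the usual out-of-memory-at-startup case, not a CUDA version mismatch.
    import os
    os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")  # must precede TF GPU init

    import tensorflow as tf

    for gpu in tf.config.list_physical_devices("GPU"):
        # Grow allocations on demand instead of claiming all GPU memory.
        tf.config.experimental.set_memory_growth(gpu, True)

    # ...only then build the trainer, e.g.:
    # self.model = PPOTrainer(config=config["rllib_config"])

If several jobs share the evaluator's GPU, memory growth also keeps one agent's session from starving the next.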
Job 62131 | submission 13546 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFVI_multi-sim-validation | step: 365 | status: failed | up to date: no | evaluator: reg02 | duration: 0:03:24
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
Job 62130 | submission 13546 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFVI_multi-sim-validation | step: 365 | status: failed | up to date: no | evaluator: reg02 | duration: 0:03:28
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
Job 62129 | submission 13547 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFV_multi-sim-testing | step: 357 | status: failed | up to date: no | evaluator: reg02 | duration: 0:03:45
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
Job 62128 | submission 13548 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: no | evaluator: reg02 | duration: 0:03:38
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
Job 62127 | submission 13548 | user: András Kalapos 🇭🇺 | label: 3090 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: failed | up to date: no | evaluator: reg02 | duration: 0:03:28
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
No reset possible
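The five jobs above (62131 down to 62127) never got as far as an agent traceback: the experiment manager's handshake with the agent container timed out (Timeout during connection to ego0: <SignalTimeout in state: 2>). A plausible cause, offered as an assumption since the artefacts are hidden, is that the agent does its expensive setup (trainer construction, checkpoint restore) inside init() and exceeds the protocol's connection timeout. One way to sketch around that is lazy construction: answer the handshake immediately and build the model on the first observation. The class and method names below are illustrative, not the actual zuper_nodes node interface:

    import time

    SEED = 0  # placeholder; the real submission derives this from its config

    class LazyAgent:
        """Illustrative node: heavy work is deferred past the handshake."""

        def init(self):
            # Keep init() cheap: the evaluator's connection timeout covers it.
            self._model = None

        def _build_model(self):
            # Stand-in for the expensive part, e.g.
            # RLlibModel(SEED, experiment_idx=0, checkpoint_idx=0, logger=...)
            time.sleep(0.1)
            return object()

        def on_received_observations(self, data):
            if self._model is None:
                # The first observation pays the construction cost.
                self._model = self._build_model()
            # ...compute and publish wheel commands from data here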
Job 62126 | submission 13549 | user: András Kalapos 🇭🇺 | label: real-v0.9-3092-363 | challenge: aido-LF-sim-testing | step: 348 | status: failed | up to date: no | evaluator: reg02 | duration: 0:01:16
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
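The failed jobs shown on this page all abort through the same chain: the agent's init() in solution.py constructs a PPOTrainer, TensorFlow fails with "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize", the remote node "ego0" aborts, and the experiment manager turns that into InvalidSubmission: Getting agent protocol. That cuDNN message is usually a GPU-memory symptom (the device is already fully reserved when the first convolution is built) rather than a model bug. A minimal sketch of the usual mitigation, assuming the agent container ships TensorFlow 2.x, is to enable incremental GPU memory allocation before any op is created:

    # Sketch only: ask TensorFlow to grow GPU memory on demand instead of
    # reserving it all at startup, so cuDNN can still initialize.
    # Must run before the first op touches the GPU. Assumes TF 2.x.
    import tensorflow as tf

    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)

The same effect can often be had without code changes by exporting TF_FORCE_GPU_ALLOW_GROWTH=true in the agent's environment before TensorFlow is imported.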
Job 62125 · submission 13549 · user: András Kalapos 🇭🇺 · label: real-v0.9-3092-363 · challenge: aido-LF-sim-testing · step: 348 · status: failed · up to date: no · evaluator: reg02 · duration: 0:01:12
Message preview: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
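Job 62125 above and job 62124 below abort at the same place, /submission/model.py line 55, where the PPOTrainer is built. If GPU initialization on the evaluator is unreliable, a complementary workaround is to keep RLlib off the GPU entirely during evaluation. A hedged sketch using the standard num_gpus and num_workers config keys; the env id and checkpoint path are hypothetical placeholders, not names taken from these logs:

    # Sketch only: mirror the PPOTrainer construction from /submission/model.py,
    # but pin it to the CPU so the policy build never touches cuDNN.
    from ray.rllib.agents.ppo import PPOTrainer

    rllib_config = {
        "env": "Duckietown-v0",  # hypothetical registered env id
        "num_gpus": 0,           # build the policy on the CPU; no cuDNN involved
        "num_workers": 0,        # local worker only; evaluation needs no rollouts
    }
    trainer = PPOTrainer(config=rllib_config)
    trainer.restore("/submission/checkpoint")  # hypothetical checkpoint path

CPU-only inference is slower per step, but for a lane-following policy of this size it typically stays within the evaluator's per-step budget.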
Job 62124 · submission 13549 · user: András Kalapos 🇭🇺 · label: real-v0.9-3092-363 · challenge: aido-LF-sim-testing · step: 348 · status: failed · up to date: no · evaluator: reg02 · duration: 0:01:14
Message preview: InvalidSubmission: T [...]
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
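The failure above is TensorFlow's usual symptom when cuDNN cannot initialize because the process tried to reserve the GPU's memory all at once on an already-busy device. A minimal sketch of the standard mitigation, assuming a TF 2.x runtime like the one in the agent container (this is illustrative, not the submission's code):

    # Sketch: let TensorFlow allocate GPU memory on demand instead of grabbing
    # it all up front; this is the usual fix for "Failed to get convolution
    # algorithm ... cuDNN failed to initialize" on a shared GPU.
    import os
    os.environ.setdefault("TF_FORCE_GPU_ALLOW_GROWTH", "true")  # must be set before TF touches the GPU

    import tensorflow as tf

    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)  # per-device equivalent

Either switch has to run before the first GPU op, i.e. before PPOTrainer is constructed inside the node's init().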
Job 62123 · submission 13549 · user: András Kalapos 🇭🇺 · user label: real-v0.9-3092-363 · challenge: aido-LF-sim-testing · step: 348 · status: failed · up to date: no · evaluator: reg02 · duration: 0:01:15
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego0" aborted with the following error:

error in ego0 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              ||     return fn(*args)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              ||     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              ||     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || During handling of the above exception, another exception occurred:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              ||     return session_or_none.run(symbolic_out[0], feed_dict)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              ||     result = self._run(None, fetches, feed_dict, options_ptr,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              ||     results = self._do_run(handle, final_targets, final_fetches,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              ||     return self._do_call(_run_fn, feeds, fetches, targets, options,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              ||     raise type(e)(node_def, op, message)
              || tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              ||   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              ||   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || 	 [[default_policy/strided_slice_1/_3]]
              || 0 successful operations.
              || 0 derived errors ignored.
              ||
              || Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              ||   File "solution.py", line 127, in <module>
              ||     main()
              ||   File "solution.py", line 123, in main
              ||     wrap_direct(node=node, protocol=protocol)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              ||     run_loop(node, protocol, args)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              ||     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              ||   File "/submission/model.py", line 55, in __init__
              ||     self.model = PPOTrainer(config=config["rllib_config"])
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              ||     Trainer.__init__(self, config, env, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              ||     super().__init__(config, logger_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              ||     self._setup(copy.deepcopy(self.config))
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              ||     self._init(self.config, self.env_creator)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              ||     self.workers = self._make_workers(env_creator, self._policy,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              ||     return WorkerSet(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              ||     self._local_worker = self._make_worker(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              ||     worker = cls(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              ||     self._build_policy_map(policy_dict, policy_config)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              ||     policy_map[name] = cls(obs_space, act_space, merged_conf)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              ||     DynamicTFPolicy.__init__(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              ||     self._initialize_loss()
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              ||     postprocessed_batch = self.postprocess_trajectory(
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              ||     return postprocess_fn(self, sample_batch, other_agent_batches,
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              ||     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              ||     symbolic_out[0] = fn(*placeholders)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              ||     model_out, _ = self.model({
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              ||     res = self.forward(restored, state or [], seq_lens)
              ||   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              ||     model_out, self._value_out = self.base_model(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              ||     return self._run_internal_graph(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              ||     outputs = node.layer(*args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              ||     outputs = call_fn(cast_inputs, *args, **kwargs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              ||     return self.activation(outputs)
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              ||     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              ||     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              ||     ret = Operation(
              ||   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              ||     self._traceback = tf_stack.extract_stack()
              ||
              ||
              || The above exception was the direct cause of the following exception:
              ||
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 339, in loop
              ||     raise Exception(msg) from e
              || Exception: Exception while calling the node's init() function.
              ||
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1365, in _do_call
              || |     return fn(*args)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
              || |     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
              || |     return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[{{node default_policy/functional_1_1/conv_value_1/Relu}}]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | During handling of the above exception, another exception occurred:
              || |
              || | Traceback (most recent call last):
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 86, in call
              || |     return session_or_none.run(symbolic_out[0], feed_dict)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 957, in run
              || |     result = self._run(None, fetches, feed_dict, options_ptr,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1180, in _run
              || |     results = self._do_run(handle, final_targets, final_fetches,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1358, in _do_run
              || |     return self._do_call(_run_fn, feeds, fetches, targets, options,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1384, in _do_call
              || |     raise type(e)(node_def, op, message)
              || | tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
              || |   (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || |   (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
              || | 	 [[node default_policy/functional_1_1/conv_value_1/Relu (defined at /usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py:103) ]]
              || | 	 [[default_policy/strided_slice_1/_3]]
              || | 0 successful operations.
              || | 0 derived errors ignored.
              || |
              || | Original stack trace for 'default_policy/functional_1_1/conv_value_1/Relu':
              || |   File "solution.py", line 127, in <module>
              || |     main()
              || |   File "solution.py", line 123, in main
              || |     wrap_direct(node=node, protocol=protocol)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 24, in wrap_direct
              || |     run_loop(node, protocol, args)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 243, in run_loop
              || |     loop(node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin, fo_desc=fout)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              || |     call_if_fun_exists(node, "init", context=context_data)
              || |   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              || |     f(**kwargs)
              || |   File "solution.py", line 29, in init
              || |     self.model = RLlibModel(SEED,experiment_idx=0,checkpoint_idx=0,logger=context)
              || |   File "/submission/model.py", line 55, in __init__
              || |     self.model = PPOTrainer(config=config["rllib_config"])
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__
              || |     Trainer.__init__(self, config, env, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 455, in __init__
              || |     super().__init__(config, logger_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 174, in __init__
              || |     self._setup(copy.deepcopy(self.config))
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 596, in _setup
              || |     self._init(self.config, self.env_creator)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 115, in _init
              || |     self.workers = self._make_workers(env_creator, self._policy,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 662, in _make_workers
              || |     return WorkerSet(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 61, in __init__
              || |     self._local_worker = self._make_worker(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 237, in _make_worker
              || |     worker = cls(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in __init__
              || |     self._build_policy_map(policy_dict, policy_config)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
              || |     policy_map[name] = cls(obs_space, act_space, merged_conf)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 129, in __init__
              || |     DynamicTFPolicy.__init__(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 237, in __init__
              || |     self._initialize_loss()
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/dynamic_tf_policy.py", line 324, in _initialize_loss
              || |     postprocessed_batch = self.postprocess_trajectory(
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/tf_policy_template.py", line 155, in postprocess_trajectory
              || |     return postprocess_fn(self, sample_batch, other_agent_batches,
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 182, in postprocess_ppo_gae
              || |     last_r = policy._value(sample_batch[SampleBatch.NEXT_OBS][-1],
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/tf_ops.py", line 84, in call
              || |     symbolic_out[0] = fn(*placeholders)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/ppo/ppo_tf_policy.py", line 235, in value
              || |     model_out, _ = self.model({
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 150, in __call__
              || |     res = self.forward(restored, state or [], seq_lens)
              || |   File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/tf/visionnet_v2.py", line 103, in forward
              || |     model_out, self._value_out = self.base_model(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 385, in call
              || |     return self._run_internal_graph(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
              || |     outputs = node.layer(*args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
              || |     outputs = call_fn(cast_inputs, *args, **kwargs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 269, in call
              || |     return self.activation(outputs)
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 10435, in relu
              || |     _, _, _op, _outputs = _op_def_library._apply_op_helper(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
              || |     op = g._create_op_internal(op_type_name, inputs, dtypes=None,
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
              || |     ret = Operation(
              || |   File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
              || |     self._traceback = tf_stack.extract_stack()
              || |
              || |
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 249, in main
    raise InvalidSubmission(msg) from e
duckietown_challenges.exceptions.InvalidSubmission: Getting agent protocol
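The same cuDNN initialization failure recurs in this job's init(). A quick way to separate a broken GPU environment from a bug in the policy code is to force one convolution through the device before building the trainer; a hedged sketch, assuming eager TF 2.x (the input shape is arbitrary):

    # Sketch of a GPU smoke test: a single convolution forces cuDNN to
    # initialize, so the UnknownError above would surface here, early and
    # clearly, instead of deep inside PPOTrainer.__init__.
    import numpy as np
    import tensorflow as tf

    x = tf.constant(np.random.rand(1, 64, 64, 3).astype("float32"))
    y = tf.keras.layers.Conv2D(8, 3, activation="relu")(x)
    print("cuDNN ok, conv output shape:", y.shape)

If this snippet fails with the same UnknownError, the container's GPU setup, not the submission logic, is at fault.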
Job 62118 · submission 13556 · user: András Kalapos 🇭🇺 · user label: real-v0.9-3092-363 · challenge: aido-LFP-sim-validation · step: 350 · status: success · up to date: no · evaluator: reg02 · duration: 0:07:43
survival_time_median: 5.324999999999989
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.9304158564653051
deviation-center-line_median: 0.2355752792466956


other stats
agent_compute-ego0_max: 0.04225463578195283
agent_compute-ego0_mean: 0.03684203977303549
agent_compute-ego0_median: 0.0370086540310406
agent_compute-ego0_min: 0.03109621524810791
complete-iteration_max: 0.517383686219803
complete-iteration_mean: 0.4580368870492344
complete-iteration_median: 0.4770187891351766
complete-iteration_min: 0.3607262837067815
deviation-center-line_max: 0.305290021708849
deviation-center-line_mean: 0.24418797361129485
deviation-center-line_min: 0.2003113142429392
deviation-heading_max: 0.8575584224474907
deviation-heading_mean: 0.69729156439083
deviation-heading_median: 0.7205211726396921
deviation-heading_min: 0.4905654898364451
driven_any_max: 2.4729798089162323
driven_any_mean: 1.7407682326336364
driven_any_median: 1.9493202435208667
driven_any_min: 0.5914526345765804
driven_lanedir_consec_max: 2.454537823955258
driven_lanedir_consec_mean: 1.6775960174849835
driven_lanedir_consec_min: 0.395014533054066
driven_lanedir_max: 2.454537823955258
driven_lanedir_mean: 1.6775960174849835
driven_lanedir_median: 1.9304158564653051
driven_lanedir_min: 0.395014533054066
get_duckie_state_max: 0.0951138714264179
get_duckie_state_mean: 0.06400737100948665
get_duckie_state_median: 0.07337351035590124
get_duckie_state_min: 0.014168591899726229
get_robot_state_max: 0.01529860702054254
get_robot_state_mean: 0.013785341295493192
get_robot_state_median: 0.014831216326656278
get_robot_state_min: 0.010180325508117675
get_state_dump_max: 0.031137901408071735
get_state_dump_mean: 0.028023325487432513
get_state_dump_median: 0.028027725257096243
get_state_dump_min: 0.024899950027465825
get_ui_image_max: 0.06504598771682893
get_ui_image_mean: 0.05827287086092431
get_ui_image_median: 0.05956504657350738
get_ui_image_min: 0.04891540257985355
in-drivable-lane_max: 0.40000000000000024
in-drivable-lane_mean: 0.10000000000000006
in-drivable-lane_min: 0.0
per-episodes
details: {
  "LFP-norm-loop-000-ego0": {"driven_any": 2.1554502635854904, "get_ui_image": 0.0544382539288751, "step_physics": 0.18710552412888096, "survival_time": 5.749999999999988, "driven_lanedir": 2.136280444030911, "get_state_dump": 0.0287810152974622, "get_robot_state": 0.01529860702054254, "sim_render-ego0": 0.011683213299718396, "get_duckie_state": 0.0951138714264179, "in-drivable-lane": 0.0, "deviation-heading": 0.7490430500128196, "agent_compute-ego0": 0.03756968111827456, "complete-iteration": 0.4683130671238077, "set_robot_commands": 0.008381074872510186, "deviation-center-line": 0.24107541612694527, "driven_lanedir_consec": 2.136280444030911, "sim_compute_sim_state": 0.021642742485835635, "sim_compute_performance-ego0": 0.008071800758098734},
  "LFP-norm-zigzag-000-ego0": {"driven_any": 0.5914526345765804, "get_ui_image": 0.06469183921813965, "step_physics": 0.23394334316253665, "survival_time": 2.4499999999999993, "driven_lanedir": 0.395014533054066, "get_state_dump": 0.024899950027465825, "get_robot_state": 0.010180325508117675, "sim_render-ego0": 0.007971611022949219, "get_duckie_state": 0.0715959119796753, "in-drivable-lane": 0.40000000000000024, "deviation-heading": 0.4905654898364451, "agent_compute-ego0": 0.03109621524810791, "complete-iteration": 0.4857245111465454, "set_robot_commands": 0.006753568649291992, "deviation-center-line": 0.23007514236644588, "driven_lanedir_consec": 0.395014533054066, "sim_compute_sim_state": 0.0284035062789917, "sim_compute_performance-ego0": 0.005969390869140625},
  "LFP-norm-techtrack-000-ego0": {"driven_any": 1.7431902234562429, "get_ui_image": 0.06504598771682893, "step_physics": 0.23237814084447997, "survival_time": 4.899999999999991, "driven_lanedir": 1.7245512688996991, "get_state_dump": 0.02727443521673029, "get_robot_state": 0.015264101702757556, "sim_render-ego0": 0.011373105675283103, "get_duckie_state": 0.07515110873212719, "in-drivable-lane": 0.0, "deviation-heading": 0.6919992952665646, "agent_compute-ego0": 0.04225463578195283, "complete-iteration": 0.517383686219803, "set_robot_commands": 0.008816162745157877, "deviation-center-line": 0.2003113142429392, "driven_lanedir_consec": 1.7245512688996991, "sim_compute_sim_state": 0.030932662462947343, "sim_compute_performance-ego0": 0.008675250140103426},
  "LFP-norm-small_loop-000-ego0": {"driven_any": 2.4729798089162323, "get_ui_image": 0.04891540257985355, "step_physics": 0.1691152430672682, "survival_time": 6.499999999999985, "driven_lanedir": 2.454537823955258, "get_state_dump": 0.031137901408071735, "get_robot_state": 0.014398330950555, "sim_render-ego0": 0.009763479232788086, "get_duckie_state": 0.014168591899726229, "in-drivable-lane": 0.0, "deviation-heading": 0.8575584224474907, "agent_compute-ego0": 0.036447626943806655, "complete-iteration": 0.3607262837067815, "set_robot_commands": 0.007558959131022446, "deviation-center-line": 0.305290021708849, "driven_lanedir_consec": 2.454537823955258, "sim_compute_sim_state": 0.021652703976813164, "sim_compute_performance-ego0": 0.0073542449310535695}
}
set_robot_commands_max: 0.008816162745157877
set_robot_commands_mean: 0.007877441349495626
set_robot_commands_median: 0.007970017001766316
set_robot_commands_min: 0.006753568649291992
sim_compute_performance-ego0_max: 0.008675250140103426
sim_compute_performance-ego0_mean: 0.007517671674599088
sim_compute_performance-ego0_median: 0.007713022844576152
sim_compute_performance-ego0_min: 0.005969390869140625
sim_compute_sim_state_max: 0.030932662462947343
sim_compute_sim_state_mean: 0.02565790380114696
sim_compute_sim_state_median: 0.025028105127902434
sim_compute_sim_state_min: 0.021642742485835635
sim_render-ego0_max: 0.011683213299718396
sim_render-ego0_mean: 0.0101978523076847
sim_render-ego0_median: 0.010568292454035596
sim_render-ego0_min: 0.007971611022949219
simulation-passed: 1
step_physics_max: 0.23394334316253665
step_physics_mean: 0.20563556280079143
step_physics_median: 0.2097418324866805
step_physics_min: 0.1691152430672682
survival_time_max: 6.499999999999985
survival_time_mean: 4.899999999999991
survival_time_min: 2.4499999999999993
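The aggregate rows above (_min, _max, _mean, _median) are plain reductions over the four per-episode values in the details mapping. A sketch of how to recompute them, assuming the details blob has been parsed from its JSON text (details_json below is a hypothetical stand-in for that string):

    # Sketch: recompute the aggregates from the per-episodes "details" mapping.
    # For this job it reproduces survival_time_median = 5.3249... and
    # survival_time_mean = 4.8999... from the episode values 5.75, 2.45, 4.9, 6.5.
    import json
    from statistics import mean, median

    episodes = json.loads(details_json)  # details_json: the {"LFP-norm-...": ...} text above

    for metric in ("survival_time", "deviation-center-line"):
        values = [ep[metric] for ep in episodes.values()]
        print(metric, "min/max/mean/median:",
              min(values), max(values), mean(values), median(values))

With an even number of episodes the median is the mean of the two middle values, which is why these medians match none of the episodes exactly.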
Job 62116 · submission 13563 · user: András Kalapos 🇭🇺 · user label: real-v0.9-3092-363 · challenge: aido-LFV_multi-sim-validation · step: 356 · status: error · up to date: no · evaluator: reg02 · duration: 0:05:39
InvalidEvaluator:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 297, in read_reply
    raise ExternalTimeout(msg) from None
zuper_nodes.structures.ExternalTimeout: Timeout of 120 violated while waiting for None.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 383, in main
    raise dc.InvalidEvaluator(msg) from e
duckietown_challenges.exceptions.InvalidEvaluator: Anomalous error while running episodes:
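This job failed differently: the experiment manager waited 120 s for the agent to acknowledge the "seed" topic, got nothing, and classified the stall as InvalidEvaluator rather than InvalidSubmission, since it cannot tell whether the agent or the infrastructure hung. The underlying mechanism is a read with a deadline; a generic sketch of that pattern (read_reply and ExternalTimeout here are illustrative stand-ins, not the zuper_nodes_wrapper internals):

    # Sketch of a deadline-guarded read like the one that raised
    # "Timeout of 120 violated": poll the reply pipe and raise a dedicated
    # exception once the time budget is spent.
    import select
    import time

    class ExternalTimeout(Exception):
        pass

    def read_reply(fp, timeout=120.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            ready, _, _ = select.select([fp], [], [], 1.0)  # re-check once per second
            if ready:
                return fp.readline()
        raise ExternalTimeout(f"Timeout of {timeout:.0f} violated while waiting for reply")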
Job 62111 · submission 13567 · user: Márton Tim 🇭🇺 · user label: 3626 · challenge: aido-LFI-full-sim-validation · step: 363 · status: success · up to date: no · evaluator: reg02 · duration: 0:09:52
survival_time_median: 11.300000000000036
in-drivable-lane_median: 1.0999999999999992
driven_lanedir_consec_median: 4.5326961371923264
deviation-center-line_median: 0.7098198987918333


other stats
agent_compute-ego0_max: 0.08518331465513809
agent_compute-ego0_mean: 0.08473557335502443
agent_compute-ego0_median: 0.08473557335502443
agent_compute-ego0_min: 0.08428783205491078
complete-iteration_max: 0.4976913307024085
complete-iteration_mean: 0.4900816364247653
complete-iteration_median: 0.4900816364247653
complete-iteration_min: 0.48247194214712213
deviation-center-line_max: 1.0298821173642438
deviation-center-line_mean: 0.7098198987918333
deviation-center-line_min: 0.3897576802194227
deviation-heading_max: 2.698653448697972
deviation-heading_mean: 2.183164455868696
deviation-heading_median: 2.183164455868696
deviation-heading_min: 1.6676754630394195
driven_any_max: 7.529905657396592
driven_any_mean: 5.18195977715977
driven_any_median: 5.18195977715977
driven_any_min: 2.83401389692295
driven_lanedir_consec_max: 6.862367401540247
driven_lanedir_consec_mean: 4.5326961371923264
driven_lanedir_consec_min: 2.203024872844405
driven_lanedir_max: 7.039718281345724
driven_lanedir_mean: 4.652482980507246
driven_lanedir_median: 4.652482980507246
driven_lanedir_min: 2.265247679668769
get_duckie_state_max: 2.882903135275539e-06
get_duckie_state_mean: 2.847775650164378e-06
get_duckie_state_median: 2.847775650164378e-06
get_duckie_state_min: 2.812648165053216e-06
get_robot_state_max: 0.012572846080683457
get_robot_state_mean: 0.011777528583008396
get_robot_state_median: 0.011777528583008396
get_robot_state_min: 0.01098221108533334
get_state_dump_max: 0.02096193078635395
get_state_dump_mean: 0.019296424828676145
get_state_dump_median: 0.019296424828676145
get_state_dump_min: 0.017630918870998335
get_ui_image_max: 0.06726255969724794
get_ui_image_mean: 0.06532915329915671
get_ui_image_median: 0.06532915329915671
get_ui_image_min: 0.06339574690106549
in-drivable-lane_max: 1.5499999999999945
in-drivable-lane_mean: 1.0999999999999992
in-drivable-lane_min: 0.6500000000000039
per-episodes
details: {
  "LFI-full-4way-000-ego0": {"driven_any": 2.83401389692295, "get_ui_image": 0.06726255969724794, "step_physics": 0.2600953872653021, "survival_time": 6.849999999999984, "driven_lanedir": 2.265247679668769, "get_state_dump": 0.02096193078635395, "get_robot_state": 0.01098221108533334, "sim_render-ego0": 0.010830848113350246, "get_duckie_state": 2.812648165053216e-06, "in-drivable-lane": 1.5499999999999945, "deviation-heading": 1.6676754630394195, "agent_compute-ego0": 0.08518331465513809, "complete-iteration": 0.4976913307024085, "set_robot_commands": 0.008142903231192326, "deviation-center-line": 0.3897576802194227, "driven_lanedir_consec": 2.203024872844405, "sim_compute_sim_state": 0.027097435965054276, "sim_compute_performance-ego0": 0.00690544515416242},
  "LFI-full-udem1-000-ego0": {"driven_any": 7.529905657396592, "get_ui_image": 0.06339574690106549, "step_physics": 0.25007617398153376, "survival_time": 15.750000000000089, "driven_lanedir": 7.039718281345724, "get_state_dump": 0.017630918870998335, "get_robot_state": 0.012572846080683457, "sim_render-ego0": 0.009793752356420589, "get_duckie_state": 2.882903135275539e-06, "in-drivable-lane": 0.6500000000000039, "deviation-heading": 2.698653448697972, "agent_compute-ego0": 0.08428783205491078, "complete-iteration": 0.48247194214712213, "set_robot_commands": 0.008273040946525863, "deviation-center-line": 1.0298821173642438, "driven_lanedir_consec": 6.862367401540247, "sim_compute_sim_state": 0.029171578491790383, "sim_compute_performance-ego0": 0.0070517629007749915}
}
set_robot_commands_max: 0.008273040946525863
set_robot_commands_mean: 0.008207972088859095
set_robot_commands_median: 0.008207972088859095
set_robot_commands_min: 0.008142903231192326
sim_compute_performance-ego0_max: 0.0070517629007749915
sim_compute_performance-ego0_mean: 0.0069786040274687055
sim_compute_performance-ego0_median: 0.0069786040274687055
sim_compute_performance-ego0_min: 0.00690544515416242
sim_compute_sim_state_max: 0.029171578491790383
sim_compute_sim_state_mean: 0.028134507228422333
sim_compute_sim_state_median: 0.028134507228422333
sim_compute_sim_state_min: 0.027097435965054276
sim_render-ego0_max: 0.010830848113350246
sim_render-ego0_mean: 0.010312300234885417
sim_render-ego0_median: 0.010312300234885417
sim_render-ego0_min: 0.009793752356420589
simulation-passed: 1
step_physics_max: 0.2600953872653021
step_physics_mean: 0.2550857806234179
step_physics_median: 0.2550857806234179
step_physics_min: 0.25007617398153376
survival_time_max: 15.750000000000089
survival_time_mean: 11.300000000000036
survival_time_min: 6.849999999999984
Job 62108 · submission 13578 · user: Márton Tim 🇭🇺 · user label: 3626 · challenge: aido-LFV_multi-sim-validation · step: 356 · status: host-error · up to date: no · evaluator: reg02 · duration: 0:01:48
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego1" aborted with the following error:

error in ego1 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "solution.py", line 29, in init
              ||     self.model = RLlibModel(0, experiment_idx=0, checkpoint_idx=0, logger=context)
              ||   File "/submission/model.py", line 42, in __init__
              ||     dummy_env = wrap_env(config["env_config"], extra_config={
              ||   File "/submission/duckietown_utils/env.py", line 46, in wrap_env
              ||     env = SegmentObsWrapper(env, model=extra_config['model'])
              ||   File "/submission/duckietown_utils/wrappers/SegmentObsWrapper.py", line 43, in __init__
              ||     self.model.cuda()
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

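Here the second agent (ego1) hit a hard CUDA out-of-memory while moving its segmentation model to the GPU: module.cuda() copies every parameter onto the device, and when several agent processes share one card that copy can fail outright. A defensive sketch, assuming a torch.nn.Module (the CPU fallback is an illustration, not the submission's actual behavior):

    # Sketch: attempt the GPU move, but fall back to CPU instead of aborting
    # the node with "RuntimeError: CUDA error: out of memory".
    import torch

    def to_best_device(model: torch.nn.Module) -> torch.nn.Module:
        if torch.cuda.is_available():
            try:
                return model.cuda()
            except RuntimeError:          # typically out of memory
                torch.cuda.empty_cache()  # release any cached blocks we hold
        return model.cpu()                # degraded but alive

Running on CPU would be slower, but it would let the episode proceed instead of aborting the whole evaluation step.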
Job 62096 · submission 13579 · user: Andras Beres · user label: 202-1 · challenge: aido-LF-sim-testing · step: 348 · status: success · up to date: no · evaluator: reg02 · duration: 1:12:00
driven_lanedir_consec_median: 28.54292211860801
survival_time_median: 59.99999999999873
deviation-center-line_median: 4.063715948204074
in-drivable-lane_median: 1.4749999999999628


other stats
agent_compute-ego0_max: 0.1656152354390496
agent_compute-ego0_mean: 0.142004663749698
agent_compute-ego0_median: 0.1364188324303353
agent_compute-ego0_min: 0.12956575469907178
complete-iteration_max: 0.6554403745760826
complete-iteration_mean: 0.5415467553889126
complete-iteration_median: 0.5280729631500181
complete-iteration_min: 0.45460072067953167
deviation-center-line_max: 4.198790176270187
deviation-center-line_mean: 4.0097935112631085
deviation-center-line_min: 3.7129519723740994
deviation-heading_max: 10.501109418167191
deviation-heading_mean: 9.169146155376255
deviation-heading_median: 9.353276291646251
deviation-heading_min: 7.468922620045325
driven_any_max: 30.929537302909985
driven_any_mean: 29.456264031867512
driven_any_median: 29.67173905879683
driven_any_min: 27.5520407069664
driven_lanedir_consec_max: 30.346691393765976
driven_lanedir_consec_mean: 28.35996491236484
driven_lanedir_consec_min: 26.007324018477345
driven_lanedir_max: 30.346691393765976
driven_lanedir_mean: 28.35996491236484
driven_lanedir_median: 28.54292211860801
driven_lanedir_min: 26.007324018477345
get_duckie_state_max: 2.9158135635668195e-06
get_duckie_state_mean: 2.7967531615550275e-06
get_duckie_state_median: 2.762955690204452e-06
get_duckie_state_min: 2.7452877022443862e-06
get_robot_state_max: 0.01933438692561395
get_robot_state_mean: 0.014915271216288497
get_robot_state_median: 0.013820560151194651
get_robot_state_min: 0.012685577637150725
get_state_dump_max: 0.045169447383515346
get_state_dump_mean: 0.029085438366635057
get_state_dump_median: 0.02583079463139263
get_state_dump_min: 0.019510716820239624
get_ui_image_max: 0.07596327065428925
get_ui_image_mean: 0.06186375902852448
get_ui_image_median: 0.05924564555324584
get_ui_image_min: 0.05300047435331702
in-drivable-lane_max: 3.1499999999999275
in-drivable-lane_mean: 1.5249999999999633
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 30.929537302909985, "get_ui_image": 0.05300047435331702, "step_physics": 0.19137287477371, "survival_time": 59.99999999999873, "driven_lanedir": 30.346691393765976, "get_state_dump": 0.019510716820239624, "get_robot_state": 0.012685577637150725, "sim_render-ego0": 0.010426386508417566, "get_duckie_state": 2.7452877022443862e-06, "in-drivable-lane": 0.4999999999999858, "deviation-heading": 7.468922620045325, "agent_compute-ego0": 0.12956575469907178, "complete-iteration": 0.45460072067953167, "set_robot_commands": 0.006623462872342404, "deviation-center-line": 3.7129519723740994, "driven_lanedir_consec": 30.346691393765976, "sim_compute_sim_state": 0.024626432310830147, "sim_compute_performance-ego0": 0.00660694667838396}, "LF-norm-zigzag-000-ego0": {"driven_any": 27.5520407069664, "get_ui_image": 0.06397351853357168, "step_physics": 0.29530906319916, "survival_time": 59.99999999999873, "driven_lanedir": 26.007324018477345, "get_state_dump": 0.021756601571838223, "get_robot_state": 0.013083012673777409, "sim_render-ego0": 0.010741651703376357, "get_duckie_state": 2.9158135635668195e-06, "in-drivable-lane": 2.44999999999994, "deviation-heading": 10.501109418167191, "agent_compute-ego0": 0.13059077393899451, "complete-iteration": 0.5846112425579417, "set_robot_commands": 0.006791935078210378, "deviation-center-line": 3.9997216720454754, "driven_lanedir_consec": 26.007324018477345, "sim_compute_sim_state": 0.034658348431297385, "sim_compute_performance-ego0": 0.00752272296209915}, "LF-norm-techtrack-000-ego0": {"driven_any": 29.31637888848468, "get_ui_image": 0.07596327065428925, "step_physics": 0.269666473236211, "survival_time": 59.99999999999873, "driven_lanedir": 27.65188092892861, "get_state_dump": 0.045169447383515346, "get_robot_state": 0.01933438692561395, "sim_render-ego0": 0.014008129169105193, "get_duckie_state": 2.746081769118996e-06, "in-drivable-lane": 3.1499999999999275, "deviation-heading": 8.836119177628508, "agent_compute-ego0": 0.1656152354390496, "complete-iteration": 0.6554403745760826, "set_robot_commands": 0.010227077708057718, "deviation-center-line": 4.198790176270187, "driven_lanedir_consec": 27.65188092892861, "sim_compute_sim_state": 0.046432628321905714, "sim_compute_performance-ego0": 0.008840476940513948}, "LF-norm-small_loop-000-ego0": {"driven_any": 30.02709922910898, "get_ui_image": 0.05451777257291999, "step_physics": 0.182063338361513, "survival_time": 59.99999999999873, "driven_lanedir": 29.43396330828741, "get_state_dump": 0.029904987690947037, "get_robot_state": 0.0145581076286119, "sim_render-ego0": 0.011393344769569162, "get_duckie_state": 2.779829611289908e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.870433405663997, "agent_compute-ego0": 0.14224689092167608, "complete-iteration": 0.4715346837420944, "set_robot_commands": 0.009604200534677624, "deviation-center-line": 4.127710224362671, "driven_lanedir_consec": 29.43396330828741, "sim_compute_sim_state": 0.01965230350986706, "sim_compute_performance-ego0": 0.00741643155246452}}
set_robot_commands_max: 0.010227077708057718
set_robot_commands_mean: 0.008311669048322031
set_robot_commands_median: 0.008198067806444002
set_robot_commands_min: 0.006623462872342404
sim_compute_performance-ego0_max: 0.008840476940513948
sim_compute_performance-ego0_mean: 0.007596644533365394
sim_compute_performance-ego0_median: 0.007469577257281835
sim_compute_performance-ego0_min: 0.00660694667838396
sim_compute_sim_state_max: 0.046432628321905714
sim_compute_sim_state_mean: 0.03134242814347508
sim_compute_sim_state_median: 0.029642390371063768
sim_compute_sim_state_min: 0.01965230350986706
sim_render-ego0_max: 0.014008129169105193
sim_render-ego0_mean: 0.01164237803761707
sim_render-ego0_median: 0.01106749823647276
sim_render-ego0_min: 0.010426386508417566
simulation-passed: 1
step_physics_max: 0.29530906319916
step_physics_mean: 0.2346029373926485
step_physics_median: 0.2305196740049605
step_physics_min: 0.182063338361513
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
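The "other stats" rows are per-metric aggregates of the per-episode values in the details dict above: for each metric, min/mean/median/max are taken over the four episodes (statistics.median averages the two middle values, which reproduces e.g. in-drivable-lane_median = 1.475 from 0.0, 0.5, 2.45, 3.15). A minimal sketch of that aggregation, assuming the details JSON has been pulled out as a string:

    import json
    from statistics import mean, median

    def aggregate_stats(details_json: str) -> dict:
        """Collapse per-episode numeric metrics into *_min/_mean/_median/_max rows."""
        episodes = json.loads(details_json)
        per_metric = {}
        for episode in episodes.values():
            for key, value in episode.items():
                if isinstance(value, (int, float)):
                    per_metric.setdefault(key, []).append(value)
        rows = {}
        for key, values in sorted(per_metric.items()):
            rows[key + "_min"] = min(values)
            rows[key + "_mean"] = mean(values)
            rows[key + "_median"] = median(values)
            rows[key + "_max"] = max(values)
        return rows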
No reset possible
Job 62094 | submission 13586 | user: Andras Beres | label: 202-1 | challenge: aido-LFP-sim-validation | step: 350 | status: success | up to date: no | evaluator: reg02 | duration: 0:08:14
survival_time_median: 4.6499999999999915
in-drivable-lane_median: 0.0
driven_lanedir_consec_median: 1.7964620401090152
deviation-center-line_median: 0.2765538903866969


other stats
agent_compute-ego0_max: 0.15565981409128973
agent_compute-ego0_mean: 0.14086457608675698
agent_compute-ego0_median: 0.14519330344177864
agent_compute-ego0_min: 0.11741188337218086
complete-iteration_max: 0.6552236384533822
complete-iteration_mean: 0.5995997089074432
complete-iteration_median: 0.5991312076954374
complete-iteration_min: 0.5449127817855162
deviation-center-line_max: 0.4257199605729894
deviation-center-line_mean: 0.2926605254610903
deviation-center-line_min: 0.19181436049797795
deviation-heading_max: 1.214770654593495
deviation-heading_mean: 0.7819174152775236
deviation-heading_median: 0.7646811555370905
deviation-heading_min: 0.38353669544241825
driven_any_max: 2.646667534116804
driven_any_mean: 1.7092015228716133
driven_any_median: 1.82462434016342
driven_any_min: 0.540889877042809
driven_lanedir_consec_max: 2.593616185382574
driven_lanedir_consec_mean: 1.680196324786399
driven_lanedir_consec_min: 0.53424503354499
driven_lanedir_max: 2.593616185382574
driven_lanedir_mean: 1.680196324786399
driven_lanedir_median: 1.7964620401090152
driven_lanedir_min: 0.53424503354499
get_duckie_state_max: 0.09259184351507224
get_duckie_state_mean: 0.06973125714938111
get_duckie_state_median: 0.08233392606699597
get_duckie_state_min: 0.0216653329484603
get_robot_state_max: 0.015259073135700633
get_robot_state_mean: 0.013823814685568637
get_robot_state_median: 0.013828391769485388
get_robot_state_min: 0.012379402067603134
get_state_dump_max: 0.03910129628282912
get_state_dump_mean: 0.0364045839134846
get_state_dump_median: 0.03705152678588432
get_state_dump_min: 0.03241398579934064
get_ui_image_max: 0.079325980328499
get_ui_image_mean: 0.06685979811312115
get_ui_image_median: 0.06455635776478727
get_ui_image_min: 0.05900049659441102
in-drivable-lane_max: 0.0
in-drivable-lane_mean: 0.0
in-drivable-lane_min: 0.0
per-episodes
details{"LFP-norm-loop-000-ego0": {"driven_any": 2.18916975561165, "get_ui_image": 0.05900049659441102, "step_physics": 0.19102853873990616, "survival_time": 5.249999999999989, "driven_lanedir": 2.1640796525441077, "get_state_dump": 0.035444635265278364, "get_robot_state": 0.012620228641438034, "sim_render-ego0": 0.009595789999332067, "get_duckie_state": 0.09259184351507224, "in-drivable-lane": 0.0, "deviation-heading": 0.750563343355278, "agent_compute-ego0": 0.11741188337218086, "complete-iteration": 0.5660520589576578, "set_robot_commands": 0.007176048350784014, "deviation-center-line": 0.2757789780075416, "driven_lanedir_consec": 2.1640796525441077, "sim_compute_sim_state": 0.031230935510599387, "sim_compute_performance-ego0": 0.009736425471755692}, "LFP-norm-zigzag-000-ego0": {"driven_any": 0.540889877042809, "get_ui_image": 0.079325980328499, "step_physics": 0.25082636386790175, "survival_time": 2.3, "driven_lanedir": 0.53424503354499, "get_state_dump": 0.03910129628282912, "get_robot_state": 0.015259073135700633, "sim_render-ego0": 0.008514713733754259, "get_duckie_state": 0.07390686806212081, "in-drivable-lane": 0.0, "deviation-heading": 0.38353669544241825, "agent_compute-ego0": 0.1475325036556163, "complete-iteration": 0.6552236384533822, "set_robot_commands": 0.005893179710875166, "deviation-center-line": 0.19181436049797795, "driven_lanedir_consec": 0.53424503354499, "sim_compute_sim_state": 0.02273569715783951, "sim_compute_performance-ego0": 0.01194270113681225}, "LFP-norm-techtrack-000-ego0": {"driven_any": 1.4600789247151902, "get_ui_image": 0.0669092521434877, "step_physics": 0.22308869478179189, "survival_time": 4.049999999999994, "driven_lanedir": 1.428844427673923, "get_state_dump": 0.03865841830649027, "get_robot_state": 0.012379402067603134, "sim_render-ego0": 0.010101021789922946, "get_duckie_state": 0.09076098407187112, "in-drivable-lane": 0.0, "deviation-heading": 0.778798967718903, "agent_compute-ego0": 0.142854103227941, "complete-iteration": 0.6322103564332171, "set_robot_commands": 0.008208094573602444, "deviation-center-line": 0.27732880276585226, "driven_lanedir_consec": 1.428844427673923, "sim_compute_sim_state": 0.034232750171568335, "sim_compute_performance-ego0": 0.004819474569181117}, "LFP-norm-small_loop-000-ego0": {"driven_any": 2.646667534116804, "get_ui_image": 0.06220346338608686, "step_physics": 0.20527100387741537, "survival_time": 6.749999999999984, "driven_lanedir": 2.593616185382574, "get_state_dump": 0.03241398579934064, "get_robot_state": 0.015036554897532743, "sim_render-ego0": 0.01467286839204676, "get_duckie_state": 0.0216653329484603, "in-drivable-lane": 0.0, "deviation-heading": 1.214770654593495, "agent_compute-ego0": 0.15565981409128973, "complete-iteration": 0.5449127817855162, "set_robot_commands": 0.009772330522537231, "deviation-center-line": 0.4257199605729894, "driven_lanedir_consec": 2.593616185382574, "sim_compute_sim_state": 0.021326866220025456, "sim_compute_performance-ego0": 0.006708371288636152}}
set_robot_commands_max: 0.009772330522537231
set_robot_commands_mean: 0.0077624132894497135
set_robot_commands_median: 0.007692071462193229
set_robot_commands_min: 0.005893179710875166
sim_compute_performance-ego0_max: 0.01194270113681225
sim_compute_performance-ego0_mean: 0.008301743116596304
sim_compute_performance-ego0_median: 0.008222398380195922
sim_compute_performance-ego0_min: 0.004819474569181117
sim_compute_sim_state_max: 0.034232750171568335
sim_compute_sim_state_mean: 0.02738156226500817
sim_compute_sim_state_median: 0.02698331633421945
sim_compute_sim_state_min: 0.021326866220025456
sim_render-ego0_max: 0.01467286839204676
sim_render-ego0_mean: 0.01072109847876401
sim_render-ego0_median: 0.009848405894627509
sim_render-ego0_min: 0.008514713733754259
simulation-passed: 1
step_physics_max: 0.25082636386790175
step_physics_mean: 0.2175536503167538
step_physics_median: 0.21417984932960363
step_physics_min: 0.19102853873990616
survival_time_max: 6.749999999999984
survival_time_mean: 4.5874999999999915
survival_time_min: 2.3
No reset possible
Job 62075 | submission 13609 | user: Andras Beres | label: fsf+il | challenge: aido-LFV_multi-sim-validation | step: 356 | status: success | up to date: no | evaluator: reg02 | duration: 1:28:27
survival_time_median: 25.500000000000227
in-drivable-lane_median: 0.1750000000000016
driven_lanedir_consec_median: 5.360290638140267
deviation-center-line_median: 1.6419512001605852


other stats
agent_compute-ego0_max: 0.1905884283669987
agent_compute-ego0_mean: 0.13749655981614667
agent_compute-ego0_median: 0.13020304877697372
agent_compute-ego0_min: 0.12516921183030552
agent_compute-ego1_max: 0.199099684353941
agent_compute-ego1_mean: 0.12487586517596448
agent_compute-ego1_median: 0.11134529707959136
agent_compute-ego1_min: 0.10883233615820356
complete-iteration_max: 1.7643214470377098
complete-iteration_mean: 1.478999549848004
complete-iteration_median: 1.6127399824370143
complete-iteration_min: 1.077682435882758
deviation-center-line_max: 4.685258885331869
deviation-center-line_mean: 2.1099678146582512
deviation-center-line_min: 0.46580390692958307
deviation-heading_max: 14.88277701798551
deviation-heading_mean: 5.191134890418641
deviation-heading_median: 3.4508961524946553
deviation-heading_min: 1.1525089054371125
driven_any_max: 22.4584202936406
driven_any_mean: 7.736861551751716
driven_any_median: 5.422717717740274
driven_any_min: 1.0082410328396767
driven_lanedir_consec_max: 21.08988072347855
driven_lanedir_consec_mean: 7.364004911170505
driven_lanedir_consec_min: 0.9961498420037288
driven_lanedir_max: 21.08988072347855
driven_lanedir_mean: 7.364004911170505
driven_lanedir_median: 5.360290638140267
driven_lanedir_min: 0.9961498420037288
get_duckie_state_max: 3.5479772005149787e-06
get_duckie_state_mean: 3.275568963865196e-06
get_duckie_state_median: 3.3460302144938913e-06
get_duckie_state_min: 2.8077119625873447e-06
get_robot_state_max: 0.051953022428553736
get_robot_state_mean: 0.04931664195184721
get_robot_state_median: 0.04988546782441104
get_robot_state_min: 0.04426761443570534
get_state_dump_max: 0.07501767733082268
get_state_dump_mean: 0.0389291545641392
get_state_dump_median: 0.03282281369558871
get_state_dump_min: 0.031126749258918535
get_ui_image_max: 0.08891100378422723
get_ui_image_mean: 0.08305114146087998
get_ui_image_median: 0.08469485023483604
get_ui_image_min: 0.07662444085067843
in-drivable-lane_max: 7.0000000000000995
in-drivable-lane_mean: 1.07142857142858
in-drivable-lane_min: 0.0
per-episodes
details{"LFV_multi-norm-loop-000-ego0": {"driven_any": 1.0082410328396767, "get_ui_image": 0.07876092066867746, "step_physics": 0.4990105414562088, "survival_time": 13.850000000000062, "driven_lanedir": 0.9961498420037288, "get_state_dump": 0.03479363935456859, "get_robot_state": 0.051953022428553736, "sim_render-ego0": 0.010964995665515929, "sim_render-ego1": 0.008975956079771193, "sim_render-ego2": 0.00990116510459845, "sim_render-ego3": 0.010688417249446292, "get_duckie_state": 3.5479772005149787e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.545925571247038, "agent_compute-ego0": 0.13057148456573486, "agent_compute-ego1": 0.10883233615820356, "agent_compute-ego2": 0.11363598854421712, "agent_compute-ego3": 0.10833718021996588, "complete-iteration": 1.2605957770519118, "set_robot_commands": 0.006361283844323467, "deviation-center-line": 1.735239931417952, "driven_lanedir_consec": 0.9961498420037288, "sim_compute_sim_state": 0.038162384959433575, "sim_compute_performance-ego0": 0.006530004439594076, "sim_compute_performance-ego1": 0.006323178895085836, "sim_compute_performance-ego2": 0.005958707212544174, "sim_compute_performance-ego3": 0.005927603879420877}, "LFV_multi-norm-loop-000-ego1": {"driven_any": 3.258770474720355, "get_ui_image": 0.07876092066867746, "step_physics": 0.4990105414562088, "survival_time": 13.850000000000062, "driven_lanedir": 3.2337016565524217, "get_state_dump": 0.03479363935456859, "get_robot_state": 0.051953022428553736, "sim_render-ego0": 0.010964995665515929, "sim_render-ego1": 0.008975956079771193, "sim_render-ego2": 0.00990116510459845, "sim_render-ego3": 0.010688417249446292, "get_duckie_state": 3.5479772005149787e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.8544960444546943, "agent_compute-ego0": 0.13057148456573486, "agent_compute-ego1": 0.10883233615820356, "agent_compute-ego2": 0.11363598854421712, "agent_compute-ego3": 0.10833718021996588, "complete-iteration": 1.2605957770519118, "set_robot_commands": 0.006361283844323467, "deviation-center-line": 0.6167840099545502, "driven_lanedir_consec": 3.2337016565524217, "sim_compute_sim_state": 0.038162384959433575, "sim_compute_performance-ego0": 0.006530004439594076, "sim_compute_performance-ego1": 0.006323178895085836, "sim_compute_performance-ego2": 0.005958707212544174, "sim_compute_performance-ego3": 0.005927603879420877}, "LFV_multi-norm-loop-000-ego2": {"driven_any": 1.1187115731218191, "get_ui_image": 0.07876092066867746, "step_physics": 0.4990105414562088, "survival_time": 13.850000000000062, "driven_lanedir": 1.1131295310046354, "get_state_dump": 0.03479363935456859, "get_robot_state": 0.051953022428553736, "sim_render-ego0": 0.010964995665515929, "sim_render-ego1": 0.008975956079771193, "sim_render-ego2": 0.00990116510459845, "sim_render-ego3": 0.010688417249446292, "get_duckie_state": 3.5479772005149787e-06, "in-drivable-lane": 0.0, "deviation-heading": 2.8838969112830357, "agent_compute-ego0": 0.13057148456573486, "agent_compute-ego1": 0.10883233615820356, "agent_compute-ego2": 0.11363598854421712, "agent_compute-ego3": 0.10833718021996588, "complete-iteration": 1.2605957770519118, "set_robot_commands": 0.006361283844323467, "deviation-center-line": 0.7423025763132469, "driven_lanedir_consec": 1.1131295310046354, "sim_compute_sim_state": 0.038162384959433575, "sim_compute_performance-ego0": 0.006530004439594076, "sim_compute_performance-ego1": 0.006323178895085836, "sim_compute_performance-ego2": 0.005958707212544174, "sim_compute_performance-ego3": 0.005927603879420877}, 
"LFV_multi-norm-loop-000-ego3": {"driven_any": 6.713440324010016, "get_ui_image": 0.07876092066867746, "step_physics": 0.4990105414562088, "survival_time": 13.850000000000062, "driven_lanedir": 6.638092313984866, "get_state_dump": 0.03479363935456859, "get_robot_state": 0.051953022428553736, "sim_render-ego0": 0.010964995665515929, "sim_render-ego1": 0.008975956079771193, "sim_render-ego2": 0.00990116510459845, "sim_render-ego3": 0.010688417249446292, "get_duckie_state": 3.5479772005149787e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.7142911499450282, "agent_compute-ego0": 0.13057148456573486, "agent_compute-ego1": 0.10883233615820356, "agent_compute-ego2": 0.11363598854421712, "agent_compute-ego3": 0.10833718021996588, "complete-iteration": 1.2605957770519118, "set_robot_commands": 0.006361283844323467, "deviation-center-line": 0.8659188623204263, "driven_lanedir_consec": 6.638092313984866, "sim_compute_sim_state": 0.038162384959433575, "sim_compute_performance-ego0": 0.006530004439594076, "sim_compute_performance-ego1": 0.006323178895085836, "sim_compute_performance-ego2": 0.005958707212544174, "sim_compute_performance-ego3": 0.005927603879420877}, "LFV_multi-norm-zigzag-000-ego0": {"driven_any": 22.4584202936406, "get_ui_image": 0.08891100378422723, "step_physics": 0.9845537465928376, "survival_time": 48.099999999999405, "driven_lanedir": 21.08988072347855, "get_state_dump": 0.03282281369558871, "get_robot_state": 0.04988546782441104, "sim_render-ego0": 0.010189216332519783, "sim_render-ego1": 0.00966342067421411, "sim_render-ego2": 0.009309081149868751, "sim_render-ego3": 0.008682929095449477, "get_duckie_state": 3.3460302144938913e-06, "in-drivable-lane": 2.599999999999987, "deviation-heading": 7.488414182630852, "agent_compute-ego0": 0.12516921183030552, "agent_compute-ego1": 0.11134529707959136, "agent_compute-ego2": 0.1091438382090314, "agent_compute-ego3": 0.1093684874343476, "complete-iteration": 1.7643214470377098, "set_robot_commands": 0.007262798855980608, "deviation-center-line": 3.5117593562722984, "driven_lanedir_consec": 21.08988072347855, "sim_compute_sim_state": 0.06477536442123841, "sim_compute_performance-ego0": 0.007168443527300905, "sim_compute_performance-ego1": 0.005452958718019606, "sim_compute_performance-ego2": 0.005030844441826841, "sim_compute_performance-ego3": 0.005130230328251887}, "LFV_multi-norm-zigzag-000-ego1": {"driven_any": 11.600709578828434, "get_ui_image": 0.08891100378422723, "step_physics": 0.9845537465928376, "survival_time": 48.099999999999405, "driven_lanedir": 10.92528890466095, "get_state_dump": 0.03282281369558871, "get_robot_state": 0.04988546782441104, "sim_render-ego0": 0.010189216332519783, "sim_render-ego1": 0.00966342067421411, "sim_render-ego2": 0.009309081149868751, "sim_render-ego3": 0.008682929095449477, "get_duckie_state": 3.3460302144938913e-06, "in-drivable-lane": 1.2500000000000044, "deviation-heading": 14.88277701798551, "agent_compute-ego0": 0.12516921183030552, "agent_compute-ego1": 0.11134529707959136, "agent_compute-ego2": 0.1091438382090314, "agent_compute-ego3": 0.1093684874343476, "complete-iteration": 1.7643214470377098, "set_robot_commands": 0.007262798855980608, "deviation-center-line": 4.685258885331869, "driven_lanedir_consec": 10.92528890466095, "sim_compute_sim_state": 0.06477536442123841, "sim_compute_performance-ego0": 0.007168443527300905, "sim_compute_performance-ego1": 0.005452958718019606, "sim_compute_performance-ego2": 0.005030844441826841, "sim_compute_performance-ego3": 0.005130230328251887}, 
"LFV_multi-norm-zigzag-000-ego2": {"driven_any": 14.882625929480804, "get_ui_image": 0.08891100378422723, "step_physics": 0.9845537465928376, "survival_time": 48.099999999999405, "driven_lanedir": 14.082195873383895, "get_state_dump": 0.03282281369558871, "get_robot_state": 0.04988546782441104, "sim_render-ego0": 0.010189216332519783, "sim_render-ego1": 0.00966342067421411, "sim_render-ego2": 0.009309081149868751, "sim_render-ego3": 0.008682929095449477, "get_duckie_state": 3.3460302144938913e-06, "in-drivable-lane": 1.4500000000000108, "deviation-heading": 6.631339641284122, "agent_compute-ego0": 0.12516921183030552, "agent_compute-ego1": 0.11134529707959136, "agent_compute-ego2": 0.1091438382090314, "agent_compute-ego3": 0.1093684874343476, "complete-iteration": 1.7643214470377098, "set_robot_commands": 0.007262798855980608, "deviation-center-line": 3.3761229140722087, "driven_lanedir_consec": 14.082195873383895, "sim_compute_sim_state": 0.06477536442123841, "sim_compute_performance-ego0": 0.007168443527300905, "sim_compute_performance-ego1": 0.005452958718019606, "sim_compute_performance-ego2": 0.005030844441826841, "sim_compute_performance-ego3": 0.005130230328251887}, "LFV_multi-norm-zigzag-000-ego3": {"driven_any": 11.288432337203725, "get_ui_image": 0.08891100378422723, "step_physics": 0.9845537465928376, "survival_time": 48.099999999999405, "driven_lanedir": 10.909233457179283, "get_state_dump": 0.03282281369558871, "get_robot_state": 0.04988546782441104, "sim_render-ego0": 0.010189216332519783, "sim_render-ego1": 0.00966342067421411, "sim_render-ego2": 0.009309081149868751, "sim_render-ego3": 0.008682929095449477, "get_duckie_state": 3.3460302144938913e-06, "in-drivable-lane": 0.3500000000000032, "deviation-heading": 10.291517953394273, "agent_compute-ego0": 0.12516921183030552, "agent_compute-ego1": 0.11134529707959136, "agent_compute-ego2": 0.1091438382090314, "agent_compute-ego3": 0.1093684874343476, "complete-iteration": 1.7643214470377098, "set_robot_commands": 0.007262798855980608, "deviation-center-line": 3.973005565851766, "driven_lanedir_consec": 10.909233457179283, "sim_compute_sim_state": 0.06477536442123841, "sim_compute_performance-ego0": 0.007168443527300905, "sim_compute_performance-ego1": 0.005452958718019606, "sim_compute_performance-ego2": 0.005030844441826841, "sim_compute_performance-ego3": 0.005130230328251887}, "LFV_multi-norm-techtrack-000-ego0": {"driven_any": 9.289474282206614, "get_ui_image": 0.08469485023483604, "step_physics": 0.8311178642243089, "survival_time": 25.500000000000227, "driven_lanedir": 8.76615241756709, "get_state_dump": 0.031126749258918535, "get_robot_state": 0.04863594936064778, "sim_render-ego0": 0.010688935240654096, "sim_render-ego1": 0.009216306727459753, "sim_render-ego2": 0.007777303632224843, "sim_render-ego3": 0.009375058974771816, "get_duckie_state": 3.1666279772256454e-06, "in-drivable-lane": 7.0000000000000995, "deviation-heading": 3.355866733742272, "agent_compute-ego0": 0.13020304877697372, "agent_compute-ego1": 0.11733805270111024, "agent_compute-ego2": 0.11416787690612434, "agent_compute-ego3": 0.11179864523228834, "complete-iteration": 1.6127399824370143, "set_robot_commands": 0.007825631218180498, "deviation-center-line": 1.1572362815813495, "driven_lanedir_consec": 8.76615241756709, "sim_compute_sim_state": 0.05593560959728263, "sim_compute_performance-ego0": 0.007174836679447421, "sim_compute_performance-ego1": 0.005075638541503428, "sim_compute_performance-ego2": 0.00490181133471823, "sim_compute_performance-ego3": 
0.005515518487083002}, "LFV_multi-norm-techtrack-000-ego1": {"driven_any": 3.634422435690906, "get_ui_image": 0.08469485023483604, "step_physics": 0.8311178642243089, "survival_time": 25.500000000000227, "driven_lanedir": 3.060634333986818, "get_state_dump": 0.031126749258918535, "get_robot_state": 0.04863594936064778, "sim_render-ego0": 0.010688935240654096, "sim_render-ego1": 0.009216306727459753, "sim_render-ego2": 0.007777303632224843, "sim_render-ego3": 0.009375058974771816, "get_duckie_state": 3.1666279772256454e-06, "in-drivable-lane": 1.3000000000000007, "deviation-heading": 9.103409844428269, "agent_compute-ego0": 0.13020304877697372, "agent_compute-ego1": 0.11733805270111024, "agent_compute-ego2": 0.11416787690612434, "agent_compute-ego3": 0.11179864523228834, "complete-iteration": 1.6127399824370143, "set_robot_commands": 0.007825631218180498, "deviation-center-line": 3.5437718854438516, "driven_lanedir_consec": 3.060634333986818, "sim_compute_sim_state": 0.05593560959728263, "sim_compute_performance-ego0": 0.007174836679447421, "sim_compute_performance-ego1": 0.005075638541503428, "sim_compute_performance-ego2": 0.00490181133471823, "sim_compute_performance-ego3": 0.005515518487083002}, "LFV_multi-norm-techtrack-000-ego2": {"driven_any": 4.131995111470532, "get_ui_image": 0.08469485023483604, "step_physics": 0.8311178642243089, "survival_time": 25.500000000000227, "driven_lanedir": 4.082488962295669, "get_state_dump": 0.031126749258918535, "get_robot_state": 0.04863594936064778, "sim_render-ego0": 0.010688935240654096, "sim_render-ego1": 0.009216306727459753, "sim_render-ego2": 0.007777303632224843, "sim_render-ego3": 0.009375058974771816, "get_duckie_state": 3.1666279772256454e-06, "in-drivable-lane": 0.0, "deviation-heading": 3.200506294096474, "agent_compute-ego0": 0.13020304877697372, "agent_compute-ego1": 0.11733805270111024, "agent_compute-ego2": 0.11416787690612434, "agent_compute-ego3": 0.11179864523228834, "complete-iteration": 1.6127399824370143, "set_robot_commands": 0.007825631218180498, "deviation-center-line": 2.793224769868812, "driven_lanedir_consec": 4.082488962295669, "sim_compute_sim_state": 0.05593560959728263, "sim_compute_performance-ego0": 0.007174836679447421, "sim_compute_performance-ego1": 0.005075638541503428, "sim_compute_performance-ego2": 0.00490181133471823, "sim_compute_performance-ego3": 0.005515518487083002}, "LFV_multi-norm-techtrack-000-ego3": {"driven_any": 12.166247135608948, "get_ui_image": 0.08469485023483604, "step_physics": 0.8311178642243089, "survival_time": 25.500000000000227, "driven_lanedir": 11.573623696993607, "get_state_dump": 0.031126749258918535, "get_robot_state": 0.04863594936064778, "sim_render-ego0": 0.010688935240654096, "sim_render-ego1": 0.009216306727459753, "sim_render-ego2": 0.007777303632224843, "sim_render-ego3": 0.009375058974771816, "get_duckie_state": 3.1666279772256454e-06, "in-drivable-lane": 1.050000000000015, "deviation-heading": 3.939493375876067, "agent_compute-ego0": 0.13020304877697372, "agent_compute-ego1": 0.11733805270111024, "agent_compute-ego2": 0.11416787690612434, "agent_compute-ego3": 0.11179864523228834, "complete-iteration": 1.6127399824370143, "set_robot_commands": 0.007825631218180498, "deviation-center-line": 1.548662468903218, "driven_lanedir_consec": 11.573623696993607, "sim_compute_sim_state": 0.05593560959728263, "sim_compute_performance-ego0": 0.007174836679447421, "sim_compute_performance-ego1": 0.005075638541503428, "sim_compute_performance-ego2": 0.00490181133471823, 
"sim_compute_performance-ego3": 0.005515518487083002}, "LFV_multi-norm-small_loop-000-ego0": {"driven_any": 3.542051695809606, "get_ui_image": 0.07662444085067843, "step_physics": 0.3555649245007438, "survival_time": 7.99999999999998, "driven_lanedir": 3.485589746288337, "get_state_dump": 0.07501767733082268, "get_robot_state": 0.04426761443570534, "sim_render-ego0": 0.01937699169846055, "sim_render-ego1": 0.014648483406682932, "get_duckie_state": 2.8077119625873447e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.1525089054371125, "agent_compute-ego0": 0.1905884283669987, "agent_compute-ego1": 0.199099684353941, "complete-iteration": 1.077682435882758, "set_robot_commands": 0.0135178047677745, "deviation-center-line": 0.5244579909543857, "driven_lanedir_consec": 3.485589746288337, "sim_compute_sim_state": 0.04736285476210695, "sim_compute_performance-ego0": 0.018840495103634666, "sim_compute_performance-ego1": 0.007579550239610376}, "LFV_multi-norm-small_loop-000-ego1": {"driven_any": 3.222519519891979, "get_ui_image": 0.07662444085067843, "step_physics": 0.3555649245007438, "survival_time": 7.99999999999998, "driven_lanedir": 3.1399072970072126, "get_state_dump": 0.07501767733082268, "get_robot_state": 0.04426761443570534, "sim_render-ego0": 0.01937699169846055, "sim_render-ego1": 0.014648483406682932, "get_duckie_state": 2.8077119625873447e-06, "in-drivable-lane": 0.0, "deviation-heading": 1.6314448400562316, "agent_compute-ego0": 0.1905884283669987, "agent_compute-ego1": 0.199099684353941, "complete-iteration": 1.077682435882758, "set_robot_commands": 0.0135178047677745, "deviation-center-line": 0.46580390692958307, "driven_lanedir_consec": 3.1399072970072126, "sim_compute_sim_state": 0.04736285476210695, "sim_compute_performance-ego0": 0.018840495103634666, "sim_compute_performance-ego1": 0.007579550239610376}}
set_robot_commands_max: 0.0135178047677745
set_robot_commands_mean: 0.008059604657820522
set_robot_commands_median: 0.007262798855980608
set_robot_commands_min: 0.006361283844323467
sim_compute_performance-ego0_max: 0.018840495103634666
sim_compute_performance-ego0_mean: 0.008655294913759925
sim_compute_performance-ego0_median: 0.007168443527300905
sim_compute_performance-ego0_min: 0.006530004439594076
sim_compute_performance-ego1_max: 0.007579550239610376
sim_compute_performance-ego1_mean: 0.005897586078404017
sim_compute_performance-ego1_median: 0.005452958718019606
sim_compute_performance-ego1_min: 0.005075638541503428
sim_compute_sim_state_max: 0.06477536442123841
sim_compute_sim_state_mean: 0.052158510388288025
sim_compute_sim_state_median: 0.05593560959728263
sim_compute_sim_state_min: 0.038162384959433575
sim_render-ego0_max: 0.01937699169846055
sim_render-ego0_mean: 0.011866183739405735
sim_render-ego0_median: 0.010688935240654096
sim_render-ego0_min: 0.010189216332519783
sim_render-ego1_max: 0.014648483406682932
sim_render-ego1_mean: 0.010051407195653292
sim_render-ego1_median: 0.009216306727459753
sim_render-ego1_min: 0.008975956079771193
simulation-passed: 1
step_physics_max: 0.9845537465928376
step_physics_mean: 0.7121327470067792
step_physics_median: 0.8311178642243089
step_physics_min: 0.3555649245007438
survival_time_max: 48.099999999999405
survival_time_mean: 26.12857142857134
survival_time_min: 7.99999999999998
No reset possible
Job 62067 | submission 13625 | user: Raphael Jean | label: mobile-segmentation | challenge: aido-LF-sim-testing | step: 348 | status: success | up to date: no | evaluator: reg02 | duration: 1:12:49
driven_lanedir_consec_median: 27.152484990780785
survival_time_median: 59.99999999999873
deviation-center-line_median: 2.2313238678044396
in-drivable-lane_median: 0.0


other stats
agent_compute-ego0_max: 0.2052702334004576
agent_compute-ego0_mean: 0.1811578692147178
agent_compute-ego0_median: 0.20236379607928784
agent_compute-ego0_min: 0.11463365129983792
complete-iteration_max: 0.6489576687920005
complete-iteration_mean: 0.5664365357602268
complete-iteration_median: 0.5474560252832036
complete-iteration_min: 0.5218764236824995
deviation-center-line_max: 2.5527653314327057
deviation-center-line_mean: 2.1746102612383953
deviation-center-line_min: 1.683027977911996
deviation-heading_max: 10.539102670037256
deviation-heading_mean: 9.096134494986105
deviation-heading_median: 8.945611422888252
deviation-heading_min: 7.954212464130658
driven_any_max: 28.120266416729564
driven_any_mean: 27.57253139878999
driven_any_median: 27.579958215947485
driven_any_min: 27.009942746535422
driven_lanedir_consec_max: 27.686279566366085
driven_lanedir_consec_mean: 26.88534278850939
driven_lanedir_consec_min: 25.550121606109904
driven_lanedir_max: 27.686279566366085
driven_lanedir_mean: 26.88534278850939
driven_lanedir_median: 27.152484990780785
driven_lanedir_min: 25.550121606109904
get_duckie_state_max: 3.359498429754989e-06
get_duckie_state_mean: 3.1931910487039203e-06
get_duckie_state_median: 3.1764660151574533e-06
get_duckie_state_min: 3.060333734745785e-06
get_robot_state_max: 0.013028536509911684
get_robot_state_mean: 0.012485267319945271
get_robot_state_median: 0.012800714852510144
get_robot_state_min: 0.011311103064849117
get_state_dump_max: 0.021640183824385137
get_state_dump_mean: 0.01941509627978272
get_state_dump_median: 0.019070193332795995
get_state_dump_min: 0.017879814629153744
get_ui_image_max: 0.06248274552236489
get_ui_image_mean: 0.05565342567643952
get_ui_image_median: 0.055728465293865215
get_ui_image_min: 0.04867402659566277
in-drivable-lane_max: 1.9500000000000215
in-drivable-lane_mean: 0.4875000000000054
in-drivable-lane_min: 0.0
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 27.92982856232948, "get_ui_image": 0.052628394864580215, "step_physics": 0.22152153578130132, "survival_time": 59.99999999999873, "driven_lanedir": 27.570773795906938, "get_state_dump": 0.021640183824385137, "get_robot_state": 0.011311103064849117, "sim_render-ego0": 0.011443048393001762, "get_duckie_state": 3.149666258139376e-06, "in-drivable-lane": 0.0, "deviation-heading": 7.954212464130658, "agent_compute-ego0": 0.2052702334004576, "complete-iteration": 0.5642528222264298, "set_robot_commands": 0.008888255150292339, "deviation-center-line": 1.683027977911996, "driven_lanedir_consec": 27.570773795906938, "sim_compute_sim_state": 0.024537188722926512, "sim_compute_performance-ego0": 0.0068024757204206655}, "LF-norm-zigzag-000-ego0": {"driven_any": 27.23008786956549, "get_ui_image": 0.06248274552236489, "step_physics": 0.2939328464441355, "survival_time": 59.99999999999873, "driven_lanedir": 26.73419618565463, "get_state_dump": 0.01923157194075636, "get_robot_state": 0.012906518407308687, "sim_render-ego0": 0.010011467905862444, "get_duckie_state": 3.203265772175531e-06, "in-drivable-lane": 0.0, "deviation-heading": 9.34781715237032, "agent_compute-ego0": 0.19964798145945328, "complete-iteration": 0.6489576687920005, "set_robot_commands": 0.010584158663150175, "deviation-center-line": 2.5054815520835567, "driven_lanedir_consec": 26.73419618565463, "sim_compute_sim_state": 0.033170553766420544, "sim_compute_performance-ego0": 0.006776239155333406}, "LF-norm-techtrack-000-ego0": {"driven_any": 27.009942746535422, "get_ui_image": 0.05882853572315022, "step_physics": 0.2701308230972608, "survival_time": 59.99999999999873, "driven_lanedir": 25.550121606109904, "get_state_dump": 0.017879814629153744, "get_robot_state": 0.013028536509911684, "sim_render-ego0": 0.010373660070910043, "get_duckie_state": 3.060333734745785e-06, "in-drivable-lane": 1.9500000000000215, "deviation-heading": 10.539102670037256, "agent_compute-ego0": 0.11463365129983792, "complete-iteration": 0.5306592283399774, "set_robot_commands": 0.007895719399559409, "deviation-center-line": 2.5527653314327057, "driven_lanedir_consec": 25.550121606109904, "sim_compute_sim_state": 0.030693351974296727, "sim_compute_performance-ego0": 0.006986702808631052}, "LF-norm-small_loop-000-ego0": {"driven_any": 28.120266416729564, "get_ui_image": 0.04867402659566277, "step_physics": 0.19400548815826493, "survival_time": 59.99999999999873, "driven_lanedir": 27.686279566366085, "get_state_dump": 0.01890881472483563, "get_robot_state": 0.012694911297711603, "sim_render-ego0": 0.010058971765535656, "get_duckie_state": 3.359498429754989e-06, "in-drivable-lane": 0.0, "deviation-heading": 8.543405693406182, "agent_compute-ego0": 0.2050796106991224, "complete-iteration": 0.5218764236824995, "set_robot_commands": 0.008809526397425566, "deviation-center-line": 1.957166183525322, "driven_lanedir_consec": 27.686279566366085, "sim_compute_sim_state": 0.01680278778076172, "sim_compute_performance-ego0": 0.006631545877575775}}
set_robot_commands_max: 0.010584158663150175
set_robot_commands_mean: 0.009044414902606872
set_robot_commands_median: 0.008848890773858952
set_robot_commands_min: 0.007895719399559409
sim_compute_performance-ego0_max: 0.006986702808631052
sim_compute_performance-ego0_mean: 0.006799240890490225
sim_compute_performance-ego0_median: 0.006789357437877036
sim_compute_performance-ego0_min: 0.006631545877575775
sim_compute_sim_state_max: 0.033170553766420544
sim_compute_sim_state_mean: 0.026300970561101377
sim_compute_sim_state_median: 0.02761527034861162
sim_compute_sim_state_min: 0.01680278778076172
sim_render-ego0_max: 0.011443048393001762
sim_render-ego0_mean: 0.010471787033827475
sim_render-ego0_median: 0.010216315918222852
sim_render-ego0_min: 0.010011467905862444
simulation-passed: 1
step_physics_max: 0.2939328464441355
step_physics_mean: 0.24489767337024065
step_physics_median: 0.24582617943928103
step_physics_min: 0.19400548815826493
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 62065 | submission 13626 | user: Raphael Jean | label: mobile-segmentation | challenge: aido-LF-sim-validation | step: 347 | status: failed | up to date: no | evaluator: reg02 | duration: 0:03:50
InvalidSubmission:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 190, in main
    raise InvalidSubmission(msg)
duckietown_challenges.exceptions.InvalidSubmission: Timeout during connection to ego0: <SignalTimeout in state: 2>
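This InvalidSubmission is raised when the agent container never completes the protocol handshake within the allowed window, often because model loading at startup is slow or hangs. The enforcement pattern is a bounded wait; the sketch below is an illustration only, with connect_to_ego as a hypothetical stand-in for the real zuper_nodes connection call and 60 s as a placeholder timeout:

    import asyncio

    async def handshake(connect_to_ego, timeout_s: float = 60.0):
        """Bound the agent handshake so a hung container fails fast with a clear error."""
        try:
            return await asyncio.wait_for(connect_to_ego("ego0"), timeout=timeout_s)
        except asyncio.TimeoutError:
            raise RuntimeError("Timeout during connection to ego0") from None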
No reset possible
Job 62058 | submission 13640 | user: Jean-Sébastien Grondin 🇨🇦 | label: exercise_ros_template | challenge: aido-LF-sim-testing | step: 348 | status: success | up to date: no | evaluator: reg02 | duration: 1:13:34
driven_lanedir_consec_median: 25.852702117456506
survival_time_median: 59.99999999999873
deviation-center-line_median: 3.9556165277028423
in-drivable-lane_median: 1.1999999999999758


other stats
agent_compute-ego0_max: 0.23730366216114815
agent_compute-ego0_mean: 0.19896065970443863
agent_compute-ego0_median: 0.2314601497388105
agent_compute-ego0_min: 0.09561867717898558
complete-iteration_max: 0.683411168813904
complete-iteration_mean: 0.5786985945046494
complete-iteration_median: 0.5667652716545339
complete-iteration_min: 0.4978526658956256
deviation-center-line_max: 4.278006103752316
deviation-center-line_mean: 3.801250272125431
deviation-center-line_min: 3.015761929343726
deviation-heading_max: 14.808898485441834
deviation-heading_mean: 11.029258448053278
deviation-heading_median: 10.310393137429571
deviation-heading_min: 8.687349031912142
driven_any_max: 27.541554960162912
driven_any_mean: 27.14400569805142
driven_any_median: 27.174708279408513
driven_any_min: 26.685051273225746
driven_lanedir_consec_max: 26.50060926305035
driven_lanedir_consec_mean: 25.751615124344436
driven_lanedir_consec_min: 24.80044699941439
driven_lanedir_max: 26.50060926305035
driven_lanedir_mean: 25.751615124344436
driven_lanedir_median: 25.852702117456506
driven_lanedir_min: 24.80044699941439
get_duckie_state_max: 3.2110079242029756e-06
get_duckie_state_mean: 3.1011289204288584e-06
get_duckie_state_median: 3.0948756437913067e-06
get_duckie_state_min: 3.003756469929843e-06
get_robot_state_max: 0.01341493719324879
get_robot_state_mean: 0.01271048731649051
get_robot_state_median: 0.012628699420989304
get_robot_state_min: 0.012169613230734642
get_state_dump_max: 0.021695316483039444
get_state_dump_mean: 0.01943649945906259
get_state_dump_median: 0.01981974223769773
get_state_dump_min: 0.016411196877815445
get_ui_image_max: 0.06620443452903373
get_ui_image_mean: 0.056820170815838664
get_ui_image_median: 0.055125994051028845
get_ui_image_min: 0.05082426063226324
in-drivable-lane_max: 1.7499999999999574
in-drivable-lane_mean: 1.2499999999999785
in-drivable-lane_min: 0.850000000000005
per-episodes
details{"LF-norm-loop-000-ego0": {"driven_any": 27.541554960162912, "get_ui_image": 0.05324929064259144, "step_physics": 0.20569454581413937, "survival_time": 59.99999999999873, "driven_lanedir": 26.50060926305035, "get_state_dump": 0.021695316483039444, "get_robot_state": 0.01341493719324879, "sim_render-ego0": 0.011045359056458485, "get_duckie_state": 3.0821705737975516e-06, "in-drivable-lane": 1.1499999999999757, "deviation-heading": 8.687349031912142, "agent_compute-ego0": 0.22681671276775428, "complete-iteration": 0.5730550485685604, "set_robot_commands": 0.009512071506268377, "deviation-center-line": 3.6577843314723686, "driven_lanedir_consec": 26.50060926305035, "sim_compute_sim_state": 0.02423981504575299, "sim_compute_performance-ego0": 0.007175738765834075}, "LF-norm-zigzag-000-ego0": {"driven_any": 26.99868906424442, "get_ui_image": 0.06620443452903373, "step_physics": 0.2845895068036031, "survival_time": 59.99999999999873, "driven_lanedir": 24.80044699941439, "get_state_dump": 0.019724662059749, "get_robot_state": 0.0125955721420809, "sim_render-ego0": 0.01113991038586873, "get_duckie_state": 3.2110079242029756e-06, "in-drivable-lane": 1.7499999999999574, "deviation-heading": 14.808898485441834, "agent_compute-ego0": 0.23730366216114815, "complete-iteration": 0.683411168813904, "set_robot_commands": 0.008787414414201748, "deviation-center-line": 4.278006103752316, "driven_lanedir_consec": 24.80044699941439, "sim_compute_sim_state": 0.0352780584689481, "sim_compute_performance-ego0": 0.007594122874746712}, "LF-norm-techtrack-000-ego0": {"driven_any": 26.685051273225746, "get_ui_image": 0.057002697459466255, "step_physics": 0.26285689915348154, "survival_time": 59.99999999999873, "driven_lanedir": 25.504815888226982, "get_state_dump": 0.016411196877815445, "get_robot_state": 0.012169613230734642, "sim_render-ego0": 0.010012043802863257, "get_duckie_state": 3.107580713785062e-06, "in-drivable-lane": 0.850000000000005, "deviation-heading": 11.179958174583897, "agent_compute-ego0": 0.09561867717898558, "complete-iteration": 0.4978526658956256, "set_robot_commands": 0.007677214628056026, "deviation-center-line": 3.015761929343726, "driven_lanedir_consec": 25.504815888226982, "sim_compute_sim_state": 0.0293548726519379, "sim_compute_performance-ego0": 0.0065576180530328935}, "LF-norm-small_loop-000-ego0": {"driven_any": 27.3507274945726, "get_ui_image": 0.05082426063226324, "step_physics": 0.19625355480711823, "survival_time": 59.99999999999873, "driven_lanedir": 26.20058834668602, "get_state_dump": 0.019914822415646467, "get_robot_state": 0.012661826699897709, "sim_render-ego0": 0.010408768943704036, "get_duckie_state": 3.003756469929843e-06, "in-drivable-lane": 1.249999999999976, "deviation-heading": 9.440828100275246, "agent_compute-ego0": 0.2361035867098666, "complete-iteration": 0.5604754947405076, "set_robot_commands": 0.008664682842511916, "deviation-center-line": 4.2534487239333165, "driven_lanedir_consec": 26.20058834668602, "sim_compute_sim_state": 0.01843017166004292, "sim_compute_performance-ego0": 0.007022209707445944}}
set_robot_commands_max: 0.009512071506268377
set_robot_commands_mean: 0.008660345847759517
set_robot_commands_median: 0.008726048628356832
set_robot_commands_min: 0.007677214628056026
sim_compute_performance-ego0_max: 0.007594122874746712
sim_compute_performance-ego0_mean: 0.007087422350264906
sim_compute_performance-ego0_median: 0.007098974236640009
sim_compute_performance-ego0_min: 0.0065576180530328935
sim_compute_sim_state_max: 0.0352780584689481
sim_compute_sim_state_mean: 0.02682572945667048
sim_compute_sim_state_median: 0.026797343848845447
sim_compute_sim_state_min: 0.01843017166004292
sim_render-ego0_max: 0.01113991038586873
sim_render-ego0_mean: 0.010651520547223623
sim_render-ego0_median: 0.01072706400008126
sim_render-ego0_min: 0.010012043802863257
simulation-passed: 1
step_physics_max: 0.2845895068036031
step_physics_mean: 0.2373486266445856
step_physics_median: 0.23427572248381048
step_physics_min: 0.19625355480711823
survival_time_max: 59.99999999999873
survival_time_mean: 59.99999999999873
survival_time_min: 59.99999999999873
No reset possible
Job 62053 | submission 13649 | user: Jean-Sébastien Grondin 🇨🇦 | label: exercise_ros_template | challenge: aido-LFV-sim-validation | step: 354 | status: host-error | up to date: no | evaluator: reg02 | duration: 0:06:13
The container "evalu [...]
The container "evaluator" exited with code 1.


Look at the logs for the container to know more about the error.
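To follow that advice on the evaluator host, one would typically locate the exited container and read the tail of its output; the commands below are a generic example, and the exact container name depends on how the evaluator launched it:

    docker ps -a --filter "name=evaluator"   # find the exited container and its ID
    docker logs --tail 100 <container-id>    # print its last lines of output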
No reset possible
Job 62050 | submission 13593 | user: Andras Beres | label: 202-1 | challenge: aido-LFV_multi-sim-validation | step: 356 | status: host-error | up to date: no | evaluator: reg02 | duration: 0:09:10
InvalidEnvironment:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_challenges/cie_concrete.py", line 681, in scoring_context
    yield cie
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 68, in go
    wrap(cie)
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/experiment_manager.py", line 34, in wrap
    asyncio.run(main(cie, logdir, attempts), debug=True)
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 248, in main
    raise InvalidEnvironment(msg) from e
duckietown_challenges.exceptions.InvalidEnvironment: Detected out of CUDA memory:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/duckietown_experiment_manager/code.py", line 242, in main
    robot_ci.write_topic_and_expect_zero("seed", config.seed)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 143, in write_topic_and_expect_zero
    msgs = read_reply(self.fpout, timeout=timeout, nickname=self.nickname)
  File "/usr/local/lib/python3.8/site-packages/zuper_nodes_wrapper/wrapper_outside.py", line 309, in read_reply
    raise RemoteNodeAborted(msg)
zuper_nodes.structures.RemoteNodeAborted: The remote node "ego3" aborted with the following error:

error in ego3 |Unexpected error:
              |
              || Traceback (most recent call last):
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 322, in loop
              ||     call_if_fun_exists(node, "init", context=context_data)
              ||   File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
              ||     f(**kwargs)
              ||   File "submission.py", line 60, in init
              ||     self.agent = self.create_agent(self.env)
              ||   File "submission.py", line 162, in create_agent_dagger
              ||     agent = DaggerAgent.load_from_checkpoint(
              ||   File "/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda
              ||     return super().cuda(device=device)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in cuda
              ||     return self._apply(lambda t: t.cuda(device))
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359, in _apply
              ||     module._apply(fn)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381, in _apply
              ||     param_applied = fn(param)
              ||   File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 463, in <lambda>
              ||     return self._apply(lambda t: t.cuda(device))
              || RuntimeError: CUDA error: out of memory
              ||

No reset possible