==> Entrypoint
Network configured successfully.
INFO: ROBOT_TYPE is externally set to 'duckiebot'.
<== Entrypoint
DEBUG:commons:version: 6.2.4 *
DEBUG:typing:version: 6.2.3
DEBUG:duckietown_world:duckietown-world version 6.2.38 path /usr/local/lib/python3.8/dist-packages
DEBUG:geometry:PyGeometry-z6 version 2.1.4 path /usr/local/lib/python3.8/dist-packages
DEBUG:aido_schemas:aido-protocols version 6.0.59 path /usr/local/lib/python3.8/dist-packages
DEBUG:nodes:version 6.2.13 path /usr/local/lib/python3.8/dist-packages pyparsing 2.4.6
DEBUG:gym-duckietown:gym-duckietown version 6.1.30 path /usr/local/lib/python3.8/dist-packages
DEBUG:ipce:version 6.1.1 path /usr/local/lib/python3.8/dist-packages
DEBUG:nodes_wrapper:checking implementation
DEBUG:nodes_wrapper:checking implementation OK
DEBUG:nodes_wrapper.PytorchRLTemplateAgent:run_loop
fin: /fifos/ego0-in
fout: fifo:/fifos/ego0-out
DEBUG:nodes_wrapper:Fifo /fifos/ego0-out created. I will block until a reader appears.
DEBUG:nodes_wrapper:Fifo reader appeared for /fifos/ego0-out.
DEBUG:nodes_wrapper.PytorchRLTemplateAgent:Starting reading
fi_desc: /fifos/ego0-in
fo_desc: fifo:/fifos/ego0-out
INFO:nodes_wrapper.PytorchRLTemplateAgent.data:0ae159d52154:PytorchRLTemplateAgent: torch.cuda.is_available = True AIDO_REQUIRE_GPU = None
INFO:nodes_wrapper.PytorchRLTemplateAgent.data:0ae159d52154:PytorchRLTemplateAgent: init()
/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75.
If you want to use the NVIDIA GeForce RTX 3070 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
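The warning above is the root cause of the crash further down: the installed PyTorch wheel ships compiled kernels only for sm_37 through sm_75, and the RTX 3070 is sm_86. A minimal sketch of that compatibility check, using only the values printed in the warning (no torch import; real PyTorch also considers PTX forward-compatibility, which evidently did not apply here):

```python
# Architectures this PyTorch wheel was built for, as listed in the warning above.
supported_archs = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]
# Compute capability of the NVIDIA GeForce RTX 3070 (from the warning).
device_arch = "sm_86"

def is_supported(device_arch, supported_archs):
    # A device can only run this wheel's kernels if its capability is in the
    # wheel's arch list (simplified: PTX JIT fallback is ignored here).
    return device_arch in supported_archs

print(is_supported(device_arch, supported_archs))  # False -> "no kernel image" at runtime
```

When this check fails, any CUDA op on that device raises the `RuntimeError: CUDA error: no kernel image is available for execution on the device` seen below.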
INFO:nodes_wrapper.PytorchRLTemplateAgent.data:0ae159d52154:PytorchRLTemplateAgent: device 0 of 1; name = 'NVIDIA GeForce RTX 3070'
INFO:aido_schemas:PytorchRLTemplateAgent init
2021-12-01 18:48:09,727 WARNING deprecation.py:38 -- DeprecationWarning: `monitor` has been deprecated. Use `record_env` instead. This will raise an error in the future!
2021-12-01 18:48:09,727 WARNING ppo.py:143 -- `train_batch_size` (128) cannot be achieved with your other settings (num_workers=1 num_envs_per_worker=1 rollout_fragment_length=200)! Auto-adjusting `rollout_fragment_length` to 128.
2021-12-01 18:48:10,009 WARNING services.py:1748 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 67096576 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=10.24gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
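Ray's warning above compares the size of /dev/shm against available RAM. A hedged sketch of that check (the 30% threshold comes from the warning text; the RAM figure is an assumption, not from the log):

```python
def shm_is_sufficient(shm_bytes, ram_bytes, fraction=0.30):
    # Ray's warning recommends /dev/shm be more than ~30% of available RAM;
    # below that, the object store falls back to /tmp and performance suffers.
    return shm_bytes >= fraction * ram_bytes

shm_bytes = 67_096_576        # ~64 MiB, the value reported in the warning
ram_bytes = 32 * 1024**3      # assumed 32 GiB host; substitute your machine's RAM
print(shm_is_sufficient(shm_bytes, ram_bytes))  # False -> object store uses /tmp
```

The remedy is the one the warning itself gives: pass `--shm-size=10.24gb` to `docker run` (or set it in the Ray cluster config's `run_options`).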
(pid=268) DEBUG:commons:version: 6.2.4 *
(pid=268) DEBUG:typing:version: 6.2.3
(pid=268) DEBUG:duckietown_world:duckietown-world version 6.2.38 path /usr/local/lib/python3.8/dist-packages
(pid=268) DEBUG:geometry:PyGeometry-z6 version 2.1.4 path /usr/local/lib/python3.8/dist-packages
(pid=268) DEBUG:aido_schemas:aido-protocols version 6.0.59 path /usr/local/lib/python3.8/dist-packages
(pid=268) DEBUG:nodes:version 6.2.13 path /usr/local/lib/python3.8/dist-packages pyparsing 2.4.6
(pid=268) DEBUG:gym-duckietown:gym-duckietown version 6.1.30 path /usr/local/lib/python3.8/dist-packages
(pid=268)
(pid=268) WARNING:wrappers.general_wrappers:Dummy Duckietown Gym reset() called!
(pid=268) 2021-12-01 18:48:14,405 WARNING deprecation.py:38 -- DeprecationWarning: `SampleBatch['is_training']` has been deprecated. Use `SampleBatch.is_training` instead. This will raise an error in the future!
2021-12-01 18:48:14,542 WARNING deprecation.py:38 -- DeprecationWarning: `SampleBatch['is_training']` has been deprecated. Use `SampleBatch.is_training` instead. This will raise an error in the future!
{'audio': ('xaudio2', 'directsound', 'openal', 'pulse', 'silent'), 'debug_font': False, 'debug_gl': True, 'debug_gl_trace': False, 'debug_gl_trace_args': False, 'debug_graphics_batch': False, 'debug_lib': False, 'debug_media': False, 'debug_texture': False, 'debug_trace': False, 'debug_trace_args': False, 'debug_trace_depth': 1, 'debug_trace_flush': True, 'debug_win32': False, 'debug_x11': False, 'graphics_vbo': True, 'shadow_window': True, 'vsync': None, 'xsync': True, 'xlib_fullscreen_override_redirect': False, 'darwin_cocoa': True, 'search_local_libs': True, 'advanced_font_features': False, 'headless': False, 'headless_device': 0}
{'callbacks': <ray.rllib.agents.callbacks.MultiCallbacks object at 0x7f564f26f3a0>,
'env': 'Duckietown',
'env_config': {'accepted_start_angle_deg': 4,
'action_delay_ratio': 0.0,
'action_type': 'heading',
'aido_wrapper': False,
'camera_rand': False,
'crop_image_top': True,
'distortion': True,
'domain_rand': False,
'dynamics_rand': False,
'episode_max_steps': 10,
'eval': True,
'experiment_name': 'Debug',
'frame_repeating': 0.0,
'frame_skip': 1,
'frame_stacking': True,
'frame_stacking_depth': 3,
'grayscale_image': False,
'mode': 'debug',
'motion_blur': False,
'obstacles': {'duckie': {'density': 0.5, 'static': True},
'duckiebot': {'density': 0, 'static': False}},
'resized_input_shape': '(84, 84)',
'reward_function': 'posangle',
'seed': 0,
'simulation_framerate': 30,
'spawn_forward_obstacle': False,
'spawn_obstacles': False,
'top_crop_divider': 3,
'training_map': 'loop_empty',
'wandb': {'project': 'duckietown-rllib'}},
'evaluation_config': {'monitor': True},
'evaluation_interval': 25,
'evaluation_num_episodes': 10,
'framework': 'torch',
'gamma': 0.99,
'lr': 0.0001,
'monitor': False,
'num_gpus': 1,
'num_workers': 1,
'seed': 1234,
'train_batch_size': 128}
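Note `'num_gpus': 1` in the dump above: it forces the PPO policy onto the incompatible GPU. A hypothetical guard (the `choose_num_gpus` helper is illustrative, not part of the submission) that only requests a GPU when the wheel actually ships kernels for it:

```python
def choose_num_gpus(cuda_available, device_arch, supported_archs):
    # Request a GPU only if CUDA is available AND this torch build has
    # compiled kernels for the device's compute capability.
    return 1 if cuda_available and device_arch in supported_archs else 0

# Trimmed-down version of the config dumped above, with the guarded value.
# Inputs mirror this log: CUDA reports available, device is sm_86, wheel
# supports only up to sm_75.
rllib_config = {
    "framework": "torch",
    "num_workers": 1,
    "train_batch_size": 128,
    "num_gpus": choose_num_gpus(True, "sm_86",
                                ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]),
}
print(rllib_config["num_gpus"])  # 0 -> PPOTrainer stays on CPU, avoiding the crash below
```

Alternatively, installing a PyTorch wheel built against CUDA 11.x (which includes sm_86 kernels) lets `num_gpus: 1` stand; see the URL in the warning above for the install matrix.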
(RolloutWorker pid=268) [pyglet options dict identical to the one printed above]
ERROR:nodes_wrapper.PytorchRLTemplateAgent:Error in node PytorchRLTemplateAgent
ET: InternalProblem
tb: |Traceback (most recent call last):
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 388, in loop
| call_if_fun_exists(node, "init", context=context_data)
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/utils.py", line 21, in call_if_fun_exists
| f(**kwargs)
| File "solution.py", line 32, in init
| self.model = registy(True)
| File "/submission/run.py", line 45, in registy
| trainer = PPOTrainer(config=rllib_config)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 137, in __init__
| Trainer.__init__(self, config, env, logger_creator)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 623, in __init__
| super().__init__(config, logger_creator)
| File "/usr/local/lib/python3.8/dist-packages/ray/tune/trainable.py", line 107, in __init__
| self.setup(copy.deepcopy(self.config))
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 147, in setup
| super().setup(config)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 776, in setup
| self._init(self.config, self.env_creator)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer_template.py", line 171, in _init
| self.workers = self._make_workers(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/agents/trainer.py", line 858, in _make_workers
| return WorkerSet(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 110, in __init__
| self._local_worker = self._make_worker(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/worker_set.py", line 406, in _make_worker
| worker = cls(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 584, in __init__
| self._build_policy_map(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/evaluation/rollout_worker.py", line 1384, in _build_policy_map
| self.policy_map.create_policy(name, orig_cls, obs_space, act_space,
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/policy_map.py", line 143, in create_policy
| self[policy_id] = class_(observation_space, action_space,
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/policy_template.py", line 280, in __init__
| self._initialize_loss_from_dummy_batch(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/policy.py", line 731, in _initialize_loss_from_dummy_batch
| self.compute_actions_from_input_dict(
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/torch_policy.py", line 302, in compute_actions_from_input_dict
| return self._compute_action_helper(input_dict, state_batches,
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/utils/threading.py", line 21, in wrapper
| return func(self, *a, **k)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/policy/torch_policy.py", line 366, in _compute_action_helper
| dist_inputs, state_out = self.model(input_dict, state_batches,
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/modelv2.py", line 243, in __call__
| res = self.forward(restored, state or [], seq_lens)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/torch/visionnet.py", line 212, in forward
| conv_out = self._convs(self._features)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
| result = self.forward(*input, **kwargs)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 117, in forward
| input = module(input)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
| result = self.forward(*input, **kwargs)
| File "/usr/local/lib/python3.8/dist-packages/ray/rllib/models/torch/misc.py", line 118, in forward
| return self._model(x)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
| result = self.forward(*input, **kwargs)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 117, in forward
| input = module(input)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
| result = self.forward(*input, **kwargs)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/padding.py", line 21, in forward
| return F.pad(input, self.padding, 'constant', self.value)
| File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 3553, in _pad
| return _VF.constant_pad_nd(input, pad, value)
|RuntimeError: CUDA error: no kernel image is available for execution on the device
|
|The above exception was the direct cause of the following exception:
|
|Traceback (most recent call last):
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 287, in run_loop
| loop(my_logger, node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin,
| File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 480, in loop
| raise InternalProblem(msg) from e # XXX
|zuper_nodes.structures.InternalProblem: Unexpected error:
|
|| Traceback (most recent call last):
||   [frames omitted; identical to the traceback above]
|| RuntimeError: CUDA error: no kernel image is available for execution on the device
||
|
Traceback (most recent call last):
  [frames omitted; identical to the traceback above]
RuntimeError: CUDA error: no kernel image is available for execution on the device
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 287, in run_loop
loop(my_logger, node_name, fi, fo, node, protocol, tin, tout, config=config, fi_desc=fin,
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 480, in loop
raise InternalProblem(msg) from e # XXX
zuper_nodes.structures.InternalProblem: Unexpected error:
| Traceback (most recent call last):
|   [frames omitted; identical to the traceback above]
| RuntimeError: CUDA error: no kernel image is available for execution on the device
|
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "solution.py", line 128, in <module>
main()
File "solution.py", line 124, in main
wrap_direct(node=node, protocol=protocol)
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/interface.py", line 25, in wrap_direct
run_loop(node, protocol, args)
File "/usr/local/lib/python3.8/dist-packages/zuper_nodes_wrapper/wrapper.py", line 301, in run_loop
raise Exception(msg) from e
Exception: Error in node PytorchRLTemplateAgent