runtime error
Exit code: 1. Reason:
3.50G/3.50G [00:09<00:00, 354MB/s]
Downloading shards: 100%|██████████| 2/2 [00:37<00:00, 17.20s/it]
Downloading shards: 100%|██████████| 2/2 [00:37<00:00, 18.71s/it]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 54, in <module>
    model, vis_processor = init_model(args)
  File "/home/user/app/minigpt4/common/eval_utils.py", line 55, in init_model
    model = model_cls.from_config(model_config).to('cuda')
  File "/home/user/app/minigpt4/models/minigpt_v2.py", line 114, in from_config
    model = cls(
  File "/home/user/app/minigpt4/models/minigpt_v2.py", line 46, in __init__
    super().__init__(
  File "/home/user/app/minigpt4/models/minigpt_base.py", line 41, in __init__
    self.llama_model, self.llama_tokenizer = self.init_llm(
  File "/home/user/app/minigpt4/models/base_model.py", line 178, in init_llm
    llama_model = LlamaForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2881, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3228, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 728, in _load_state_dict_into_meta_model
    set_module_quantized_tensor_to_device(
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/bitsandbytes.py", line 101, in set_module_quantized_tensor_to_device
    new_value = value.to(device)
  File "/usr/local/lib/python3.9/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
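The crash happens because the Space is running on hardware with no NVIDIA driver, while the MiniGPT-v2 loading path hard-codes .to('cuda') and moves bitsandbytes-quantized weights onto a CUDA device. The usual remedy on Hugging Face Spaces is to assign GPU hardware to the Space; as a minimal sketch, the snippet below shows how a device guard around torch.cuda.is_available() would surface the problem cleanly instead of crashing inside torch._C._cuda_init(). The helper name pick_device and the comments about the Space's code are illustrative assumptions, not the MiniGPT-4 repository's actual API.

import torch

def pick_device() -> torch.device:
    # torch.cuda.is_available() returns False when no NVIDIA driver or GPU is
    # visible to PyTorch, which is exactly the condition raised in the traceback.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
print(f"Selected device: {device}")

if device.type == "cpu":
    # bitsandbytes quantized loading (load_in_8bit / load_in_4bit) requires CUDA,
    # so on a CPU-only machine the model would also have to be loaded in full
    # precision for the fallback to work at all.
    print("No CUDA device found; an 8-bit quantized load would fail here.")

# In the Space's code the failing call is roughly:
#     model = model_cls.from_config(model_config).to('cuda')
# Replacing the literal 'cuda' with the device chosen above avoids the
# torch._C._cuda_init() crash, though CPU inference for a model of this size
# will be extremely slow; switching the Space to GPU hardware is the practical fix.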