Error in deserializing headers

#1
by Amit-Bin-Tariqul - opened

I am trying to use the BanglaLLAMA model. I have tried loading it from ~/.cache/huggingface/hub/models--bangla-llama-7b-base-v0.1, but I keep getting this error:

SafetensorError: Error while deserializing header: InvalidHeaderDeserialization

Some parameters are on the meta device because they were offloaded to the cpu.

SafetensorError Traceback (most recent call last)
Cell In[12], line 10
6 # Load tokenizer
7 tokenizer = AutoTokenizer.from_pretrained(model_name, force_download=True)
---> 10 model = AutoModelForCausalLM.from_pretrained(
11 model_name,
12 use_safetensors=False,
13 force_download=True,
14 device_map="auto",
15 )
17 tokenizer.save_pretrained(save_directory)
18 model.save_pretrained(save_directory)

File ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:564, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
562 elif type(config) in cls._model_mapping.keys():
563 model_class = _get_model_class(config, cls._model_mapping)
--> 564 return model_class.from_pretrained(
565 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
566 )
567 raise ValueError(
568 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
569 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
570 )

File ~/.local/lib/python3.10/site-packages/transformers/modeling_utils.py:4309, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs)
4306 model.hf_quantizer = hf_quantizer
4308 if _adapter_model_path is not None:
-> 4309 model.load_adapter(
4310 _adapter_model_path,
4311 adapter_name=adapter_name,
4312 token=token,
4313 adapter_kwargs=adapter_kwargs,
4314 )
4316 if output_loading_info:
4317 if loading_info is None:

File ~/.local/lib/python3.10/site-packages/transformers/integrations/peft.py:226, in PeftAdapterMixin.load_adapter(self, peft_model_id, adapter_name, revision, token, device_map, max_memory, offload_folder, offload_index, peft_config, adapter_state_dict, low_cpu_mem_usage, is_trainable, adapter_kwargs)
223 self._hf_peft_config_loaded = True
225 if peft_model_id is not None:
--> 226 adapter_state_dict = load_peft_weights(peft_model_id, token=token, device=device, **adapter_kwargs)
228 # We need to pre-process the state dict to remove unneeded prefixes - for backward compatibility
229 processed_adapter_state_dict = {}

File ~/.local/lib/python3.10/site-packages/peft/utils/save_and_load.py:444, in load_peft_weights(model_id, device, **hf_hub_download_kwargs)
442 adapters_weights = safe_load_file(filename, device="cpu")
443 else:
--> 444 adapters_weights = safe_load_file(filename, device=device)
445 else:
446 adapters_weights = torch.load(filename, map_location=torch.device(device))

File ~/.local/lib/python3.10/site-packages/safetensors/torch.py:313, in load_file(filename, device)
290 """
291 Loads a safetensors file into torch format.
292
(...)
310
311 """
312 result = {}
--> 313 with safe_open(filename, framework="pt", device=device) as f:
314 for k in f.keys():
315 result[k] = f.get_tensor(k)

SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
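
From the traceback it looks like the failure happens while PEFT reads an adapter .safetensors file from the cache (load_peft_weights -> safe_open), and from what I have read InvalidHeaderDeserialization usually points to a truncated or corrupted file on disk, e.g. from an interrupted download. As a quick check I can run something like the sketch below; it only assumes the documented safetensors layout (an 8-byte little-endian length followed by a JSON header) and uses the same cache path as in my code further down:

import json
import struct
from pathlib import Path

def check_safetensors_header(path):
    # A failure anywhere below usually means the file is truncated or corrupted
    # and needs to be re-downloaded.
    size = path.stat().st_size
    with open(path, "rb") as fh:
        # First 8 bytes: little-endian uint64 holding the length of the JSON header.
        (header_len,) = struct.unpack("<Q", fh.read(8))
        if 8 + header_len > size:
            raise ValueError(f"header claims {header_len} bytes, file is only {size} bytes")
        json.loads(fh.read(header_len))  # raises if the header is not valid JSON
    print(f"OK: {path}")

cache_dir = Path("/home/cse/.cache/huggingface/hub/models--bangla-llama-7b-base-v0.1")
for f in cache_dir.rglob("*.safetensors"):
    try:
        check_safetensors_header(f)
    except Exception as exc:
        print(f"BAD: {f}: {exc}")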

I have run this code:


# Local path where the model files are stored
model_dir = "/home/cse/.cache/huggingface/hub/models--bangla-llama-7b-base-v0.1"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

print("Model and tokenizer loaded successfully!")
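
One thing I am unsure about: the path above is the top-level models--... folder of the hub cache rather than the snapshots directory that from_pretrained actually reads from. Here is a sketch of what I was planning to try instead, letting huggingface_hub resolve (and force-re-download) the snapshot; the repo id below is only my guess at the Hub id, please correct it if it is wrong:

from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id - replace with the real one for this model.
repo_id = "BanglaLLM/bangla-llama-7b-base-v0.1"

# force_download refreshes any corrupted cached files and returns the local
# snapshot directory, which is the path from_pretrained actually expects.
local_dir = snapshot_download(repo_id=repo_id, force_download=True)

tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir, device_map="auto")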

Can you tell me how to fix this?
