How can we use vLLM to serve this GGUF?

#3
by pty819 - opened

Hi, I tried to follow a topic in the vLLM issues where someone said this was fixed. I downloaded the 2.51-bit GGUF, downloaded the tokenizer, and changed torch_dtype to float16, but vLLM still reports that the DeepSeek V2 GGUF architecture is not supported. Can you help me run this with vLLM 0.8.4?
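For reference, the architecture string that the error complains about is written into the GGUF header itself, so it can be checked directly. A minimal sketch using the gguf Python package (assuming `pip install gguf`; the field-access details follow recent gguf releases, and the file path is the one used below):

```python
# Sketch: print the architecture a GGUF file declares in its header.
# Assumes the `gguf` package (pip install gguf); field access follows
# the layout of recent releases and may differ in older ones.
from gguf import GGUFReader

reader = GGUFReader("/data/upload_files/deepseek-R1.gguf")
field = reader.fields["general.architecture"]

# String fields keep their bytes in one of the `parts` arrays, indexed by `data`.
architecture = bytes(field.parts[field.data[0]]).decode("utf-8")
print(architecture)  # expected to print "deepseek2" for this model
```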

root@ubuntu:/data/vllm-serve# uv run vllm serve /data/upload_files/deepseek-R1.gguf --tokenizer /data/upload_files/R1_tokenizer/
INFO 04-25 19:27:48 [__init__.py:239] Automatically detected platform cuda.
INFO 04-25 19:27:49 [api_server.py:1034] vLLM API server version 0.8.4
INFO 04-25 19:27:49 [api_server.py:1035] args: Namespace(subparser='serve', model_tag='/data/upload_files/deepseek-R1.gguf', config='', host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=[''], allowed_methods=[''], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/data/upload_files/deepseek-R1.gguf', task='auto', tokenizer='/data/upload_files/R1_tokenizer/', hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, load_format='auto', download_dir=None, model_loader_extra_config=None, use_tqdm_on_load=True, config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='auto', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, disable_custom_all_reduce=False, block_size=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_token=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_cascade_attn=False, disable_chunked_mm_input=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, 
enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x7f8f4f6016c0>)
Traceback (most recent call last):
File "/mnt/data/vllm-serve/.venv/bin/vllm", line 10, in
sys.exit(main())
^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 51, in main
args.dispatch_function(args)
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 27, in cmd
uvloop.run(run_server(args))
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/uvloop/init.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/data/python/python3.12.9/install/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/data/python/python3.12.9/install/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/uvloop/init.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1069, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/python/python3.12.9/install/lib/python3.12/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 146, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/python/python3.12.9/install/lib/python3.12/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 166, in build_async_engine_client_from_engine_args
vllm_config = engine_args.create_engine_config(usage_context=usage_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1154, in create_engine_config
model_config = self.create_model_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1042, in create_model_config
return ModelConfig(
^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/config.py", line 423, in init
hf_config = get_config(self.hf_config_path or self.model,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 286, in get_config
config_dict, _ = PretrainedConfig.get_config_dict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/transformers/configuration_utils.py", line 590, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/transformers/configuration_utils.py", line 681, in _get_config_dict
config_dict = load_gguf_checkpoint(resolved_config_file, return_tensors=False)["config"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/vllm-serve/.venv/lib/python3.12/site-packages/transformers/modeling_gguf_pytorch_utils.py", line 401, in load_gguf_checkpoint
raise ValueError(f"GGUF model with architecture {architecture} is not supported yet.")
ValueError: GGUF model with architecture deepseek2 is not supported yet.
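The failing call can also be reproduced outside vLLM, which suggests the limitation sits in transformers' GGUF config detection rather than in vLLM itself. A minimal sketch based on the call shown in the traceback above:

```python
# Sketch: call the same transformers helper that vLLM's config loading hits.
# With return_tensors=False only the GGUF metadata is read, so this is quick,
# but it raises the same ValueError about the "deepseek2" architecture.
from transformers.modeling_gguf_pytorch_utils import load_gguf_checkpoint

load_gguf_checkpoint("/data/upload_files/deepseek-R1.gguf", return_tensors=False)
```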

Finally I figured it out: uv run vllm serve /data/upload_files/deepseek-R1.gguf --enable-reasoning --reasoning-parser deepseek_r1 --hf-config-path /data/upload_files/R1_tokenizer/ --tokenizer /data/upload_files/R1_tokenizer/ --tensor-parallel-size 8 --host 0.0.0.0 --port 55556 --gpu-memory-utilization 0.91
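Once that server is up, it exposes vLLM's usual OpenAI-compatible API on port 55556. A minimal sketch of a request with the openai client, assuming the default behaviour of reusing the --model path as the served model name (no --served-model-name is set above):

```python
# Minimal sketch: query the OpenAI-compatible server started by the
# `vllm serve` command above (port 55556, no API key configured).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:55556/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    # The served model name defaults to the --model path when
    # --served-model-name is not given; adjust if you set one.
    model="/data/upload_files/deepseek-R1.gguf",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    temperature=0.6,
)
print(resp.choices[0].message.content)
```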

Unsloth AI org

Finally I figured it out: uv run vllm serve /data/upload_files/deepseek-R1.gguf --enable-reasoning --reasoning-parser deepseek_r1 --hf-config-path /data/upload_files/R1_tokenizer/ --tokenizer /data/upload_files/R1_tokenizer/ --tensor-parallel-size 8 --host 0.0.0.0 --port 55556 --gpu-memory-utilization 0.91

Wait, so it works? Oh wow, I had no idea. I'll pass this on to anyone else having the problem. Thank you!

Maybe you could have someone try to load that GGUF into vLLM on an 8x H800 machine... On my 8x RTX A6000 machine it reports an out-of-video-memory fault, but no other faults.
Can you provide the safetensors files so we can use vLLM to load them directly?

Unsloth AI org

Isn't this the safetensors version, or do you mean the dynamic versions? I'm not sure if it can even work for safetensors conversion, apologies.
