# Meta LLaMA-3-8B-Instruct GGUF Models

These are quantized versions of Meta's LLaMA-3-8B-Instruct model (8.03B parameters, `llama` architecture), produced with IPEX and converted to GGUF format.

Supported formats:

- `f32.gguf` (32-bit float)
- `f16.gguf` (16-bit float)
- `q8_0.gguf` (8-bit quantized)
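The GGUF files can be fetched directly from this repo with `huggingface_hub`; a minimal sketch (it assumes the filenames listed above sit at the repo root):

```python
from huggingface_hub import hf_hub_download

# Fetch the 8-bit quantized file; swap the filename for
# "f16.gguf" or "f32.gguf" to get a different precision.
model_path = hf_hub_download(
    repo_id="ilayaraja3/llama-3-8b-instruct-gguf",
    filename="q8_0.gguf",
)
print(model_path)  # local cache path of the downloaded file
```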

## Usage

```python
from llama_cpp import Llama

# Load the quantized model from a local GGUF file
llm = Llama(model_path="q8_0.gguf")

# Run a plain text completion
response = llm("What is AI?")
print(response)
```
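Because this is an instruct-tuned model, chat-style prompting is usually a better fit than raw completion. A sketch using llama-cpp-python's `create_chat_completion` (llama.cpp applies the chat template stored in the GGUF metadata when one is present; the system prompt and `max_tokens` value below are illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="q8_0.gguf")

# Chat-style request; roles follow the OpenAI message schema.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is AI?"},
    ],
    max_tokens=256,  # illustrative cap on generated tokens
)
print(response["choices"][0]["message"]["content"])
```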