Uploaded model
- Developed by: AquilaX-AI
- License: apache-2.0
- Finetuned from model: AquilaX-AI/ai_scanner

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
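To run the model with Hugging Face transformers, first install GGUF support: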
```bash
pip install gguf
pip install transformers
```
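Then load the quantized checkpoint and run a scan. The prompt below uses the ChatML markers (`<|im_start|>` / `<|im_end|>`) that the model expects: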
```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "AquilaX-AI/AI-Scanner-Quantized"
filename = "unsloth.Q8_0.gguf"

# Load the tokenizer and model directly from the GGUF file
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

# ChatML-style prompt: system instructions, then the code to scan
sys_prompt = """<|im_start|>system\nYou are Securitron, an AI assistant specialized in detecting vulnerabilities in source code. Analyze the provided code and provide a structured report on any security issues found.<|im_end|>"""

user_prompt = """
CODE FOR SCANNING
"""

prompt = f"""{sys_prompt}
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
"""

encodeds = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.to(device)

# Stream tokens to stdout as they are generated, skipping the prompt
text_streamer = TextStreamer(tokenizer, skip_prompt=True)

response = model.generate(
    input_ids=encodeds,
    streamer=text_streamer,
    max_new_tokens=4096,
    use_cache=True,
    pad_token_id=151645,  # <|im_end|> for Qwen2
    eos_token_id=151645,
    num_return_sequences=1,
)

# Extract the assistant's reply between the ChatML markers and parse it as JSON
output = json.loads(
    tokenizer.decode(response[0]).split('<|im_start|>assistant')[-1].split('<|im_end|>')[0].strip()
)
```
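The last line parses the model's structured report into a Python dict. The report schema is not documented on this card, so the following is a minimal sketch assuming a hypothetical `findings` list; inspect `output` to learn the real field names.

```python
# Minimal sketch of consuming the parsed report.
# ASSUMPTION: "findings", "severity", and "description" are hypothetical
# field names for illustration; check the actual keys in `output`.
for finding in output.get("findings", []):
    print(f"[{finding.get('severity', 'unknown')}] {finding.get('description', '')}")
```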
Quantized variants: 4-bit, 5-bit, and 8-bit GGUF files are available in this repository.
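To trade accuracy for memory, point `gguf_file` at one of the smaller quantizations instead of the 8-bit file. A sketch, assuming the 4-bit file follows the same `unsloth.Q*.gguf` naming pattern; verify the exact filename in the repository's file list:

```python
# ASSUMPTION: this filename is hypothetical; confirm it in the repo's file list.
filename = "unsloth.Q4_K_M.gguf"
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)
```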