finetuned-qwen-2.5-Coder-7B-Instruct

🚀 Finetuned version of Qwen/Qwen2.5-Coder-7B-Instruct by rbharmal.

This model is finetuned for type inference in Python.
It was trained with Unsloth for optimized fine-tuning, and the LoRA weights are merged into the base model for easier deployment.

Model Details

  • Base Model: Qwen2.5-Coder-7B-Instruct
  • Finetuning Method: LoRA (merged)
  • Context Length: 4,000 tokens
  • Quantization: None (full precision, bfloat16)
  • Optimizations: Gradient Checkpointing, 4-bit loading during training
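The training setup itself is not published; the details above (LoRA, 4-bit loading, gradient checkpointing, 4,000-token context) can be sketched with Unsloth roughly as follows. The rank, alpha, and target modules below are assumptions for illustration, not the actual training configuration.

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit for memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-Coder-7B-Instruct",
    max_seq_length=4000,   # context length used for training
    load_in_4bit=True,
)

# Attach LoRA adapters; hyperparameters here are illustrative guesses.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# After training, merge the LoRA weights into the base model so the
# result can be loaded as a plain Transformers checkpoint.
model.save_pretrained_merged("finetuned-qwen", tokenizer,
                             save_method="merged_16bit")
```

Merging the adapters is what lets the published checkpoint be loaded with plain `AutoModelForCausalLM.from_pretrained` below, with no PEFT dependency at inference time.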

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "rbharmal/finetuned-qwen-2.5-Coder-7B-Instruct",
    torch_dtype="auto",   # loads the BF16 weights as stored
    device_map="auto",    # place on GPU if available
)
tokenizer = AutoTokenizer.from_pretrained("rbharmal/finetuned-qwen-2.5-Coder-7B-Instruct")

prompt = """
## Task Description

**Objective**: Examine and identify the data types of various elements such as function parameters, local variables, and function return types in the given Python code.

**Instructions**:
1. For each question below, provide a concise, one-word answer indicating the data type.
2. For arguments and variables inside a function, list every data type they take within the current program context as a comma-separated list.
3. Do not include additional explanations or commentary in your answers.
4. If a type's nested level exceeds 2, replace all components at that level and beyond with Any.

**Python Code Provided**:
def param_func(x):
    return x

c = param_func("Hello")

**Questions**:
1. What is the return type of the function 'param_func' at line 1, column 5?
2. What is the type of the parameter 'x' at line 1, column 15, in the function 'param_func'?
3. What is the type of the variable 'c' at line 4, column 1? """
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
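Instruction 4 of the prompt (truncate types nested deeper than level 2 to `Any`) can be applied to a type annotation string with a small helper. This is a hypothetical post-processing utility written for illustration; it is not part of the model or its training code.

```python
import re


def truncate_type(annotation: str, max_depth: int = 2) -> str:
    """Replace every component nested deeper than max_depth with Any.

    E.g. at max_depth=2, the str/int inside List[Dict[str, int]]
    sit at level 3 and are rewritten to Any.
    """
    def walk(s: str, depth: int) -> str:
        m = re.fullmatch(r"\s*(\w+)\[(.*)\]\s*", s)
        if not m:  # simple, non-generic type
            return "Any" if depth > max_depth else s.strip()
        if depth > max_depth:
            return "Any"
        name, inner = m.group(1), m.group(2)
        # Split the type arguments on top-level commas only.
        parts, buf, level = [], "", 0
        for ch in inner:
            if ch == "[":
                level += 1
            elif ch == "]":
                level -= 1
            if ch == "," and level == 0:
                parts.append(buf)
                buf = ""
            else:
                buf += ch
        parts.append(buf)
        return f"{name}[{', '.join(walk(p, depth + 1) for p in parts)}]"

    return walk(annotation, 1)


print(truncate_type("List[Dict[str, int]]"))  # → List[Dict[Any, Any]]
print(truncate_type("int"))                   # → int
```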
Safetensors
  • Model size: 7.62B params
  • Tensor type: BF16

Model tree for rbharmal/finetuned-qwen-2.5-Coder-7B-Instruct

  • Base model: Qwen/Qwen2.5-7B
  • Finetuned from: Qwen/Qwen2.5-Coder-7B-Instruct → this model