# Resume-Job Matcher LoRA
This is a LoRA fine-tuned version of BAAI/bge-large-en-v1.5 for matching resumes with job descriptions.
## Model Details
- Base Model: BAAI/bge-large-en-v1.5
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- LoRA Config: r=8, alpha=16, target_modules=['query', 'key', 'value'] (see the sketch after this list)
- Dataset: cnamuangtoun/resume-job-description-fit
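
For reference, the adapter configuration above corresponds roughly to the following `peft` setup. This is a sketch rather than the actual training script: `lora_dropout` and any omitted hyperparameters are assumptions, not confirmed values.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModel

# Sketch of the adapter setup described above; lora_dropout is an
# assumed value, not a confirmed training hyperparameter.
lora_config = LoraConfig(
    r=8,                                       # rank of the low-rank update matrices
    lora_alpha=16,                             # scaling factor applied to the update
    target_modules=["query", "key", "value"],  # BERT-style attention projections
    lora_dropout=0.1,                          # assumption
)

base_model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights train
```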
## Usage
Here's how to use this model:
```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel
import torch
import torch.nn.functional as F

# Load the base encoder, apply the LoRA adapter, and load the tokenizer
base_model = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")
model = PeftModel.from_pretrained(base_model, "shashu2325/resume-job-matcher-lora")
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
model.eval()

# Example texts
resume_text = "Software engineer with Python experience"
job_text = "Looking for Python developer"

# Tokenize each text (BGE has a 512-token context window); no padding is
# needed for a single sequence, so plain mean pooling below stays correct
resume_inputs = tokenizer(resume_text, return_tensors="pt", max_length=512, truncation=True)
job_inputs = tokenizer(job_text, return_tensors="pt", max_length=512, truncation=True)

# Embed both texts without tracking gradients
with torch.no_grad():
    resume_outputs = model(**resume_inputs)
    job_outputs = model(**job_inputs)

# Mean-pool the token embeddings into one vector per text
resume_emb = resume_outputs.last_hidden_state.mean(dim=1)
job_emb = job_outputs.last_hidden_state.mean(dim=1)

# L2-normalize so the dot product equals cosine similarity
resume_emb = F.normalize(resume_emb, p=2, dim=1)
job_emb = F.normalize(job_emb, p=2, dim=1)
similarity = torch.sum(resume_emb * job_emb, dim=1)

# Map the cosine similarity into (0, 1) as a match score
match_score = torch.sigmoid(similarity).item()
print(f"Match score: {match_score:.4f}")
```