# ⚡ Phi-3 Mini Instruct Redactor - GGUF (Q4_K_M)
This is a GGUF-format, Q4_K_M-quantized version of `phi3-mini-4k-instruct-pii-redactor`, optimized for local inference with llama.cpp and other GGUF-compatible tools. The model was LoRA-finetuned for redaction/instruction tasks, and the adapter was fully merged into the base model before conversion and quantization.
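The merge-then-quantize pipeline can be reproduced with llama.cpp's own tooling. A minimal sketch, assuming a locally saved merged checkpoint; the directory and file names below are placeholders, not the exact commands used for this release:

```shell
# 1. Convert the merged Hugging Face checkpoint to GGUF at fp16.
#    convert_hf_to_gguf.py ships with the llama.cpp repository.
python convert_hf_to_gguf.py ./merged-model-dir \
  --outfile phi3-mini-instruct-redactor-f16.gguf \
  --outtype f16

# 2. Quantize the fp16 GGUF down to Q4_K_M.
./llama-quantize phi3-mini-instruct-redactor-f16.gguf \
  phi3-mini-instruct-redactor-q4km.gguf Q4_K_M
```

Quantizing from an fp16 GGUF (rather than converting directly to 4-bit) is the usual llama.cpp workflow, since `llama-quantize` expects a higher-precision GGUF as input.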
## 📦 Format & Details

- **Base model:** `microsoft/Phi-3-mini-4k-instruct`
- **LoRA merged:** ✅
- **GGUF format:** ✅
- **Quantization:** Q4_K_M (4-bit grouped k-quant, good quality-to-size trade-off)
- **File:** `phi3-mini-instruct-redactor-q4km.gguf`
- **Model size:** ~2–3 GB
## 💻 Usage with llama.cpp

```bash
./llama-cli.exe -m phi3-mini-instruct-redactor-q4km.gguf \
  --prompt "John Doe, born in 1985, lives in Berlin." \
  -n 128 -ngl 100
```
✅ Works with:

- llama.cpp (CUDA build)
- text-generation-webui
- KoboldCpp
- llama-cpp-python
- LM Studio
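With llama-cpp-python, the same model can be driven from Python. A minimal sketch, assuming the GGUF file sits in the working directory and that the model follows the standard Phi-3 instruct chat template (`<|user|>` / `<|end|>` / `<|assistant|>`); both are assumptions, not verified against this specific file:

```python
def build_phi3_prompt(user_text: str) -> str:
    """Wrap raw text in the Phi-3 instruct chat template.

    Assumes the standard Phi-3 special tokens; check the GGUF's
    embedded chat template if the model's outputs look wrong.
    """
    return f"<|user|>\n{user_text}<|end|>\n<|assistant|>\n"


prompt = build_phi3_prompt("John Doe, born in 1985, lives in Berlin.")

# Hypothetical llama-cpp-python usage (requires the GGUF file locally):
# from llama_cpp import Llama
# llm = Llama(model_path="phi3-mini-instruct-redactor-q4km.gguf",
#             n_ctx=4096, n_gpu_layers=-1)
# out = llm(prompt, max_tokens=128)
# print(out["choices"][0]["text"])
```

Frontends such as LM Studio and text-generation-webui apply the chat template for you, so manual prompt formatting is only needed for raw completion calls.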
## 📊 Dataset Attribution

This model was fine-tuned on a modified version of the NinjaMasker-PII-Redaction dataset by King-Harry, which is licensed under the Apache License 2.0. The dataset was reformatted to suit this model's training requirements.
## ⚠️ Disclaimer

This model is provided "as-is", without any warranties or guarantees regarding its performance, accuracy, or reliability in detecting and redacting personally identifiable information (PII) or other sensitive data. It may fail to identify or fully redact all forms of PII, depending on input format, context, or model limitations. Use of this model is at your own risk; the authors and maintainers accept no responsibility or liability for any data leakage, compliance violations, or security breaches that may result from its use.
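Given the caveats above, it is prudent to scan the model's output for obvious leftovers before trusting a redaction. A minimal sketch; the regexes below are illustrative assumptions that catch only a few common PII shapes, not a complete detector:

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "year":  re.compile(r"\b(?:19|20)\d{2}\b"),
}

def residual_pii(text: str) -> dict:
    """Return any pattern matches still present in supposedly redacted text."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

# Example: a birth year the model failed to redact is flagged,
# while a fully redacted string comes back empty.
leaky = "[NAME], born in 1985, lives in [CITY]."
clean = "[NAME], born in [YEAR], lives in [CITY]."
```

Treat any non-empty result as a signal for manual review rather than proof of a leak; the inverse (an empty result) proves nothing.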
## 📄 License

MIT License