---
library_name: transformers
tags: []
---

# Model Card for letxbe/mistral-7b-v03-BoundingDocs-rephrased

`letxbe/mistral-7b-v03-BoundingDocs-rephrased` is a Mistral-7B-v0.3 model fine-tuned for the Document Question Answering task. It was trained on `BoundingDocs` using the `rephrased` version of the questions.

## Model Details

### Model Description

- **Developed by:** LetXBe
- **Model type:** LLM
- **Languages:** Multilingual
- **License:** CC BY 4.0
- **Finetuned from:** `Mistral-7B-v0.3`
- **Input format:** Text using a custom prompt
- **Output format:** JSON

## 🚀 How to Use

### Prompt

Prompt the model with the following template:

```python
TEMPLATE_PROMPT = """<|startdocument|>
{DOCUMENT}
<|enddocument|>
<|starttask|>
Answer the following question about the document:
Question: "{QUESTION}"
Answer completing the following format:
'''json
{"value": ""}
'''
<|endtask|>
"""
```

where `{DOCUMENT}` is the textual content of the document page and `{QUESTION}` is the question to ask about it.

### Inference Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("letxbe/mistral-7b-v03-BoundingDocs-rephrased")
model = AutoModelForCausalLM.from_pretrained("letxbe/mistral-7b-v03-BoundingDocs-rephrased")

# Encode input
input_text = "Your prompt"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate response
outputs = model.generate(**inputs)

# Decode and print
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
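
### Filling the Prompt and Parsing the Answer

The sketch below ties the two snippets above together: it fills `TEMPLATE_PROMPT` with a page text and a question, generates a completion, and extracts the `{"value": ""}` JSON answer. It assumes `model`, `tokenizer`, and `TEMPLATE_PROMPT` from the previous snippets are already in scope; the sample document text, the generation settings, and the regex-based JSON extraction are illustrative assumptions, not part of the original card.

```python
import json
import re

# Hypothetical page text and question, for illustration only.
page_text = "INVOICE\nInvoice number: 12345\nTotal due: 1,250.00 EUR"
question = "What is the invoice number?"

# Use str.replace rather than str.format so the literal braces in the
# template's JSON stub are left untouched.
prompt = TEMPLATE_PROMPT.replace("{DOCUMENT}", page_text).replace("{QUESTION}", question)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Keep only the newly generated tokens, then pull out the first JSON object.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
match = re.search(r"\{.*?\}", completion, re.DOTALL)
answer = json.loads(match.group(0))["value"] if match else None
print(answer)
```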