khysam2022 committed on
Commit
772d223
verified
1 Parent(s): 929e80c

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +43 -0
  2. config.json +8 -0
README.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ language: en
+ license: cc-by-4.0
+ tags:
+ - gemma
+ - rag
+ - pdf
+ - question-answering
+ ---
+
+ # RAG-DSE-PAST-PAPER-2012-ICT
+
+ Gemma 3 4B model fine-tuned for answering questions about the DSE ICT 2012 past paper using RAG.
+
+ ## Model Description
+
+ This model is based on Gemma 3 4B and has been optimized for answering questions about PDF documents using Retrieval-Augmented Generation (RAG).
+
+ ## Usage
+
+ You can use this model with the Transformers library:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("khysam2022/RAG-DSE-PAST-PAPER-2012-ICT")
+ model = AutoModelForCausalLM.from_pretrained("khysam2022/RAG-DSE-PAST-PAPER-2012-ICT")
+
+ # Build a prompt; with RAG, retrieved document context is prepended here
+ prompt = "What is the main topic of this document?"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=100)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
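The Usage snippet above generates straight from the question; RAG additionally retrieves relevant document chunks and prepends them to the prompt. As a hedged, self-contained sketch of that retrieve-then-prompt step (the word-overlap retriever and the sample chunks below are illustrative placeholders, not the pipeline this model was actually built with):

```python
# Toy retrieve-then-prompt step for RAG.
# NOTE: the retriever here scores chunks by simple word overlap with the
# question; a real pipeline would typically use embedding similarity.

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble a grounded prompt from retrieved context plus the question."""
    joined = "\n".join(context)
    return f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"

# Hypothetical chunks extracted from the past-paper PDF
chunks = [
    "Section A covers spreadsheet formulas and cell referencing.",
    "Section B covers computer networks and IP addressing.",
]
question = "What does Section B cover?"
prompt = build_prompt(question, retrieve(question, chunks))
# Pass `prompt` to tokenizer/model.generate exactly as in the snippet above.
```

The resulting `prompt` replaces the bare question string in the earlier snippet, so the model answers from the retrieved context rather than from its weights alone.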
+
+ ## Training Details
+
+ This model was adapted from Google's Gemma 3 4B model and fine-tuned for PDF question-answering.
+
+ ## Limitations and Biases
+
+ This model inherits the limitations and biases of the base Gemma 3 model.
config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "model_type": "gemma",
+   "architectures": [
+     "GemmaForCausalLM"
+   ],
+   "name": "RAG-DSE-PAST-PAPER-2012-ICT",
+   "description": "Gemma 3 4B model fine-tuned for answering questions about DSE ICT 2012 past paper using RAG"
+ }