prithivMLmods committed on
Commit 7ec40cd · verified · 1 Parent(s): 7311f26

Update README.md

Files changed (1): README.md (+119, -1)
tags:
  - ocr
  - codec
  - qwen2vl
---

![qwenVL.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/g8zYbOSBt4NSqhSIypaX3.png)

# **LatexMind-2B-Codec-GGUF**

The **LatexMind-2B-Codec-GGUF** model is a fine-tuned version of Qwen2-VL-2B-Instruct, optimized for Optical Character Recognition (OCR), **image-to-text conversion**, and **mathematical expression extraction with LaTeX formatting**. It combines a conversational interface with visual and textual understanding to handle multi-modal tasks effectively.

# Key Enhancements

* **SoTA understanding of images across resolutions and aspect ratios**: LatexMind-2B-Codec-GGUF achieves state-of-the-art performance on visual understanding benchmarks such as MathVista, DocVQA, RealWorldQA, and MTVQA.

* **Advanced LaTeX extraction**: The model specializes in extracting structured mathematical expressions from images and documents and converting them into LaTeX for precise rendering and further computation.

* **Understanding long-duration videos (20 min+)**: LatexMind-2B-Codec-GGUF can process videos over 20 minutes long, enabling high-quality video-based question answering, step-by-step mathematical explanations, and educational content creation.

* **Agent capabilities for automated operations**: With complex reasoning and decision-making abilities, the model can be integrated with mobile devices, robots, and assistive technologies to automate tasks based on visual and textual inputs.

* **Multilingual support**: In addition to English and Chinese, the model recognizes text inside images in multiple languages, including most European languages, Japanese, Korean, Arabic, and Vietnamese.

The model is particularly effective at **retrieving mathematical notation and equations** from scanned documents, whiteboard images, and handwritten notes, converting them to accurate LaTeX code for academic and computational use.
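
As an illustration of the target output, an image of the quadratic formula should ideally be transcribed into LaTeX source along these lines (a hypothetical example for this README, not actual model output):

```latex
% Hypothetical target transcription for an image of the quadratic formula
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}
```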

# Sample Inference with Doc

![latexqwen.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/-h5z3giEudPrdM9qRMMTe.png)

Demo: https://huggingface.co/prithivMLmods/LatexMind-2B-Codec-GGUF/blob/main/latexmind/latexmind-codec.ipynb

# Use it with Transformers

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/LatexMind-2B-Codec", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory
# savings, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/LatexMind-2B-Codec",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Default processor (handles image preprocessing and tokenization)
processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen2-VL-OCR-2B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
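
Because the model's primary use case is equation extraction, the same pipeline can be reused with only the user prompt changed. The sketch below is a minimal variation of the example above; `equation_page.png` is a placeholder path, not a file shipped with this repository:

```python
# Minimal variation of the example above: ask for LaTeX instead of a description.
# "equation_page.png" is a hypothetical local image of a scanned equation.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "equation_page.png"},
            {
                "type": "text",
                "text": "Extract all mathematical expressions from this image and return them as LaTeX code.",
            },
        ],
    }
]
# Re-run apply_chat_template, process_vision_info, and model.generate as shown
# above to obtain the LaTeX transcription.
```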

# Streamed Output Buffer

The snippet below shows how the demo accumulates streamed tokens into a buffer and strips the end-of-turn marker before yielding partial output:

```python
# `streamer` is assumed to be a transformers TextIteratorStreamer that yields
# decoded text chunks while generation runs in a background thread.
buffer = ""
for new_text in streamer:
    buffer += new_text
    # Remove <|im_end|> or similar end-of-turn tokens from the output
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
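
For reference, here is a minimal sketch of how such a streamer can be wired up with `TextIteratorStreamer` and a background generation thread. It assumes the `model`, `processor`, and `inputs` objects from the Transformers example above; the generation settings are illustrative:

```python
from threading import Thread
from transformers import TextIteratorStreamer

# Stream decoded text as it is generated, skipping the prompt and special tokens.
streamer = TextIteratorStreamer(
    processor.tokenizer, skip_prompt=True, skip_special_tokens=True
)

# Run generation in a background thread so the main thread can consume the stream.
generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=512)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

buffer = ""
for new_text in streamer:
    buffer += new_text            # accumulate partial output, as in the loop above
    print(new_text, end="", flush=True)
thread.join()
```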

# Intended Use

**LatexMind-2B-Codec-GGUF** is designed for tasks that require **image-based text recognition**, **math equation extraction**, and **multi-modal understanding**. It is particularly useful in the following scenarios:

* **Optical Character Recognition (OCR)** – Extracting printed and handwritten text from images, documents, and scanned pages.
* **Math Expression Recognition** – Converting mathematical notations into structured **LaTeX format** for further computation and documentation.
* **Image-to-Text Conversion** – Generating accurate descriptions for text-rich and math-heavy images.
* **Document and Academic Processing** – Assisting researchers, students, and professionals in digitizing handwritten notes and extracting structured content from books, PDFs, and whiteboards.
* **Automated Educational Support** – Enabling AI-powered tutors, content summarization, and interactive learning for subjects involving complex equations.
* **Multi-Language OCR** – Recognizing text inside images across multiple languages, including English, Chinese, Japanese, Korean, Arabic, and various European languages.
* **Video-Based Question Answering** – Understanding long-duration videos for content summarization, question answering, and structured data extraction.

# Limitations

Despite its capabilities, **LatexMind-2B-Codec-GGUF** has some inherent limitations:

* **Handwritten Text Accuracy** – While it can recognize handwritten equations, performance may degrade with highly unstructured or messy handwriting.
* **Complex LaTeX Formatting** – The model may struggle with deeply nested or ambiguous LaTeX expressions, requiring manual corrections for precise formatting.
* **Low-Resolution Images** – Text extraction from blurry or low-resolution images can produce misinterpretations or OCR errors.
* **Contextual Understanding in Multi-Step Equations** – While it recognizes math expressions, its ability to solve multi-step problems autonomously is limited.
* **Limited Support for Rare Mathematical Notations** – Some specialized or domain-specific symbols may not be recognized with high accuracy.
* **Processing Speed for Large Documents** – Performance may slow when handling very large documents or dense mathematical content in real-time applications.
* **Language-Specific OCR Variability** – Although multiple languages are supported, OCR accuracy varies with script complexity and font style.