feat: Add SafeTensors support and prepare for Hugging Face release
- Add convert_to_safetensors.py script for model conversion
- Create comprehensive HF_README.md for Hugging Face model card
- Organize adapter structure with proper SafeTensors format
- Update modelfile to use lmstudio-community/gemma-3-27b-it-GGUF base model
- Add documentation for SafeTensors usage and conversion
This commit prepares the Gemma 3 Text-to-SQL model for publication on Hugging Face,
ensuring it follows best practices with SafeTensors format for improved security
and performance.
README.md (changed):
---
license: apache-2.0
library: gemma3-text-to-sql
datasets:
- gretelai/synthetic_text_to_sql
language:
- en
base_model:
- google/gemma-3-27b-it
pipeline_tag: text-generation
tags:
- text-to-sql
---
# Gemma 3 Text-to-SQL

A powerful LoRA-fine-tuned adapter for Gemma 3 that converts natural language questions into SQL queries with high accuracy and contextual understanding.

[Hugging Face Spaces](https://huggingface.co/spaces) · [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)

## Overview

This model is a specialized adapter built on top of Gemma 3 27B that has been fine-tuned to bridge the gap between natural language and SQL. It allows users to describe their data queries in plain English and receive accurate SQL code in return.

Key capabilities:

- Converts natural language questions to SQL queries
- Understands database schema context when provided
- Generates clean, optimized SQL with proper table joins
- Handles complex queries including aggregations, filters, and sorting

## Model Details

- **Base Model**: [lmstudio-community/gemma-3-27b-it-GGUF](https://huggingface.co/lmstudio-community/gemma-3-27b-it-GGUF)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **LoRA Configuration**:
  - Rank: 16
  - Alpha: 16
  - Dropout: 0.05
  - Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Training Data**: Synthetic and curated text-to-SQL datasets
- **Model Size**: Base (27B parameters) + Adapter (~70MB)
- **Format**: SafeTensors
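The small adapter size follows from LoRA's parameter math: a rank-r update to a weight of shape (d_out, d_in) trains only r·(d_in + d_out) values instead of d_out·d_in. A quick back-of-the-envelope illustration (the 4096×4096 layer shape below is hypothetical, not Gemma 3's actual dimensions):

```python
def lora_param_count(d_out: int, d_in: int, r: int = 16) -> int:
    # LoRA replaces a full d_out x d_in update with B (d_out x r) @ A (r x d_in)
    return r * (d_in + d_out)

# A hypothetical 4096 x 4096 projection at rank 16:
full = 4096 * 4096                    # 16,777,216 trainable values without LoRA
lora = lora_param_count(4096, 4096)   # 131,072 with LoRA, a ~128x reduction
```

Summed over the seven target modules per layer, this is why the whole adapter fits in tens of megabytes.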

## Usage

### Using Transformers Library

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model and tokenizer. Note: transformers cannot load a GGUF
# repository by id alone, so use the original instruct weights here; the
# GGUF build is intended for llama.cpp / LM Studio / Ollama.
model_id = "google/gemma-3-27b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Load adapter
adapter_path = "your-username/gemma-3-text-to-sql"  # Replace with your HF model path
model = PeftModel.from_pretrained(model, adapter_path)

# Format prompt
question = "Find all customers who made a purchase over $1000 in the last month"
prompt = f"Convert the following natural language query to SQL: {question}"

# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs.input_ids,
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Using MLX (Apple Silicon)

For Apple Silicon users, you can use MLX for efficient inference. `mlx_lm.generate` expects `--adapter-path` to point at the directory containing the adapter weights:

```bash
python -m mlx_lm.generate \
  --model lmstudio-community/gemma-3-27b-it-GGUF \
  --adapter-path your-username/gemma-3-text-to-sql \
  --prompt "Convert the following natural language query to SQL: Find all customers in New York" \
  --max-tokens 200 \
  --temp 0.7
```

### API Usage with Inference Endpoint

You can also use the Hugging Face Inference API:

```python
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/your-username/gemma-3-text-to-sql"
API_TOKEN = os.environ["HF_TOKEN"]  # your Hugging Face access token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "Convert to SQL: List all customers who placed orders in the last 30 days",
    "parameters": {"max_new_tokens": 200, "temperature": 0.7}
})
```

## Example Inputs and Outputs

### Example 1: Basic Query

**Input:**
```
Find all customers in New York
```

**Output:**
```sql
SELECT *
FROM customers
WHERE state = 'NY' OR city = 'New York';
```

### Example 2: Complex Query with Joins

**Input:**
```
List the top 5 products by revenue in the last quarter
```

**Output:**
```sql
SELECT p.product_id, p.product_name, SUM(oi.quantity * oi.unit_price) AS revenue
FROM products p
JOIN order_items oi ON p.product_id = oi.product_id
JOIN orders o ON oi.order_id = o.order_id
WHERE o.order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 3 MONTH)
GROUP BY p.product_id, p.product_name
ORDER BY revenue DESC
LIMIT 5;
```

### Example 3: With Schema Context

**Input:**
```
Schema:
CREATE TABLE employees (
    employee_id INT PRIMARY KEY,
    name VARCHAR(100),
    department VARCHAR(100),
    salary INT,
    hire_date DATE
);

Query: Find the average salary by department
```

**Output:**
```sql
SELECT department, AVG(salary) AS average_salary
FROM employees
GROUP BY department
ORDER BY average_salary DESC;
```
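Schema-plus-question prompts in this shape can be assembled with a small helper (a hypothetical convenience function, not part of the released code):

```python
def build_prompt(question: str, schema: str = "") -> str:
    """Format a text-to-SQL prompt, optionally prefixing schema context."""
    if schema:
        # Same layout as the schema-context example: schema block, blank
        # line, then the natural-language query.
        return f"Schema:\n{schema.strip()}\n\nQuery: {question.strip()}"
    return f"Convert the following natural language query to SQL: {question.strip()}"
```

Including the `CREATE TABLE` statements this way is the single biggest lever for output quality (see Limitations below).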

## Training Details

This model was fine-tuned using LoRA, a parameter-efficient fine-tuning technique that significantly reduces the number of trainable parameters while maintaining performance. The training process involved:

1. **Dataset Preparation**: A combination of synthetic and curated text-to-SQL pairs
2. **Training Configuration**:
   - Learning Rate: 5e-5
   - Batch Size: 8
   - Training Steps: 1000
   - LoRA Rank: 16
   - Gradient Checkpointing: True
3. **Hardware**: Apple Silicon with MLX acceleration
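The LoRA update itself can be sketched numerically: the frozen weight W stays fixed while a low-rank product scaled by alpha/r is added on top. Toy shapes below, with numpy standing in for the training framework:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16             # toy sizes; the real run used rank 16, alpha 16
W = rng.normal(size=(d, d))        # frozen base weight
A = rng.normal(size=(r, d))        # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection, zero-initialized

# Effective weight if the adapter were merged into the base
W_adapted = W + (alpha / r) * (B @ A)
```

Because B starts at zero, the adapted model is initially identical to the base model; training moves only A and B, which is what keeps the trainable footprint so small.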

## Limitations

- The model performs best when the database schema is included in the prompt
- Complex nested queries may require refining the prompt
- Performance varies based on domain-specific terminology
- The model may occasionally generate SQL syntax that is specific to certain database systems

## Ethical Considerations

This model is designed as a productivity tool for database queries and should be used responsibly:

- Always review and test generated SQL before executing in production environments
- Be aware that the model may reflect biases present in its training data
- The model should not be used to generate queries intended to exploit database vulnerabilities
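One lightweight way to follow the "review before executing" advice is to check that generated SQL at least parses and plans against a disposable in-memory copy of the schema. A sketch using Python's built-in sqlite3 (note that SQLite will reject syntax from other dialects, such as MySQL's `DATE_SUB`):

```python
import sqlite3

def parses_against_schema(query: str, schema: str) -> bool:
    """Return True if `query` plans successfully against `schema` in SQLite."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)          # build a throwaway copy of the schema
        conn.execute(f"EXPLAIN QUERY PLAN {query}")  # plan without running the query
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

This catches syntax errors and references to nonexistent tables or columns before anything touches real data; it is not a substitute for reviewing the query's logic.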

## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{gemma3-text-to-sql,
  author = {Your Name},
  title = {Gemma 3 Text-to-SQL: A LoRA-fine-tuned adapter for natural language to SQL conversion},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/your-username/gemma-3-text-to-sql}}
}
```

## License

This model adapter is licensed under the Apache 2.0 License. Usage of the base Gemma 3 model is subject to Google's Gemma license terms.

## Acknowledgements

We thank Google for releasing the Gemma 3 models and the Hugging Face team for their transformers library and model hosting. We also acknowledge the contributions of the MLX team at Apple for enabling efficient inference on Apple Silicon.

---

If you find any issues or have suggestions for improvement, please open an issue on the GitHub repository or reach out on the Hugging Face community forums.

This model was created by [@parole-study-viper].
|