---
license: apache-2.0
datasets:
- pookie3000/trump-interviews
- bananabot/TrumpSpeeches
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
pipeline_tag: text-generation
library_name: peft
tags:
- mistral
- peft
- lora
- adapter
- instruct
- conversational
- trump
- political
---
<div align="center">

# Trump Mistral Adapter

<img src="https://img.shields.io/badge/MODEL-Mistral--7B-blue?style=for-the-badge" alt="Model: Mistral-7B"/> <img src="https://img.shields.io/badge/ADAPTER-LoRA-red?style=for-the-badge" alt="Adapter: LoRA"/> <img src="https://img.shields.io/badge/STYLE-Trump-yellow?style=for-the-badge" alt="Style: Trump"/>

</div>

> *"This adapter, believe me folks, it's tremendous. It's the best adapter, everyone says so. We're going to do things with this model that nobody's ever seen before."*

A fine-tuned language model that captures Donald Trump's distinctive speaking style, discourse patterns, and policy positions. This LoRA adapter transforms Mistral-7B-Instruct-v0.2 to emulate the unique rhetorical flourishes and speech cadence of the former U.S. President.

<div align="center">
  <img src="https://img.shields.io/badge/🗣️ Speech Patterns-✓-success" alt="Speech Patterns"/> 
  <img src="https://img.shields.io/badge/🏛️ Policy Positions-✓-success" alt="Policy Positions"/> 
  <img src="https://img.shields.io/badge/🔄 Repetition Style-✓-success" alt="Repetition Style"/> 
  <img src="https://img.shields.io/badge/👋 Hand Gestures-✗-lightgrey" alt="Hand Gestures"/>
</div>

## ๐Ÿ” Overview

| Feature | Description |
|:--------|:------------|
| Base Model | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) |
| Architecture | LoRA adapter (Low-Rank Adaptation) |
| Training Focus | Communication style, rhetoric, and response patterns |
| Language | English |

---

## 🚀 Getting Started

### 💻 Python Implementation

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Configuration
base_model_id = "mistralai/Mistral-7B-Instruct-v0.2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply adapter
model = PeftModel.from_pretrained(model, "nnat03/trump-mistral-adapter")

# Generate a response (the tokenizer prepends the <s> BOS token, so it is omitted from the prompt string)
prompt = "What's your plan for border security?"
input_text = f"[INST] {prompt} [/INST]"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("[/INST]")[-1].strip())
```
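
If you need a standalone checkpoint (for example before converting to GGUF for the Ollama route below), the LoRA weights can be merged into the base model. A minimal sketch, assuming the base model is reloaded in fp16 without 4-bit quantization (merging into a quantized model is unreliable); the output directory name is only a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Reload the base model in full fp16 so the adapter weights can be merged cleanly
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.float16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(base, "nnat03/trump-mistral-adapter").merge_and_unload()

# Save a self-contained model (placeholder path)
merged.save_pretrained("trump-mistral-merged")
AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2").save_pretrained("trump-mistral-merged")
```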

### 🔮 Ollama Integration

For simplified local deployment:

```bash
# Pull the model
ollama pull nnat03/trump-mistral

# Run the model
ollama run nnat03/trump-mistral
```

Access this model via the [Ollama library](https://ollama.com/library/nnat03/trump-mistral).

---

## 📊 Example Output

<table>
<tr>
<th width="20%">Topic</th>
<th>Response</th>
</tr>
<tr>
<td>Border Security</td>
<td><i>"First of all, we need the wall. The wall is very important. It's not just a wall, it's steel and concrete and things that are very, very strong. We have 450 miles completed. It's an incredible job."</i></td>
</tr>
<tr>
<td>Joe Biden</td>
<td><i>"Joe Biden, I call him 1% Joe. His numbers are way down. He's a corrupt politician. He's been there for 47 years. Where has he been? What's he done? There's nothing."</i></td>
</tr>
</table>

---

## โš™๏ธ Technical Details

### 📚 Training Data

This model was trained on authentic speech patterns from:

- Trump interviews dataset ([pookie3000/trump-interviews](https://huggingface.co/datasets/pookie3000/trump-interviews))
- Trump speeches dataset ([bananabot/TrumpSpeeches](https://huggingface.co/datasets/bananabot/TrumpSpeeches))
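
Both datasets are public on the Hugging Face Hub and can be inspected directly. A quick sketch using the `datasets` library (split names and column layout may differ, so treat this as exploratory):

```python
from datasets import load_dataset

# Load the two source datasets referenced above (names as listed on the Hub)
interviews = load_dataset("pookie3000/trump-interviews")
speeches = load_dataset("bananabot/TrumpSpeeches")

# Inspect the available splits and columns before any preprocessing
print(interviews)
print(speeches)
```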

### 🔧 Model Configuration

```
LoRA rank: 16 (tremendous rank, the best rank)
Alpha: 64
Dropout: 0.05
Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
```
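
In PEFT terms, the settings above correspond to a `LoraConfig` roughly like the one below; `bias` and `task_type` are not stated on this card and are assumptions:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                       # LoRA rank
    lora_alpha=64,              # scaling factor
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",                # assumed; not specified on the card
    task_type="CAUSAL_LM",      # assumed; standard for causal LM fine-tuning
)
```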

### 🧠 Training Parameters

```
Batch size: 4
Gradient accumulation: 4
Learning rate: 2e-4
Epochs: 3
LR scheduler: cosine
Optimizer: paged_adamw_8bit
Precision: BF16
```
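
These hyperparameters map onto `transformers.TrainingArguments` roughly as follows; the output directory and logging cadence are placeholders, not values from the original training run:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trump-mistral-adapter",  # placeholder path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
    bf16=True,
    logging_steps=10,                    # assumed; not specified on the card
)
```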

---

## 🎯 Applications

<div align="center">
<table width="80%">
<tr>
<td align="center" width="33%"><b>🎓 Education</b><br><small>Political discourse analysis</small></td>
<td align="center" width="33%"><b>🔬 Research</b><br><small>Rhetoric pattern studies</small></td>
<td align="center" width="33%"><b>🎭 Creative</b><br><small>Interactive simulations</small></td>
</tr>
</table>
</div>

---

## โš ๏ธ Notes and Limitations

This model mimics a speaking style but does not guarantee factual accuracy or represent actual views. It may reproduce biases present in the training data and is primarily intended for research and educational purposes.

## 📄 Citation

```bibtex
@misc{nnat03-trump-mistral-adapter,
  author = {nnat03},
  title = {Trump Mistral Adapter},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/nnat03/trump-mistral-adapter}}
}
```

---

<div align="center">
  <p><b>Framework version:</b> PEFT 0.15.0</p>
  <p>Created for NLP research and education</p>
  <p><small>"We're gonna have the best models, believe me."</small></p>
</div>