Update README.md
README.md
CHANGED
@@ -149,6 +149,8 @@ The model was trained using a Tesla A100 GPU (40GB VRAM) on Google Colab Pro.

## Evaluation Results

+ For updated results and runs, see this public wandb project: [Full Report](https://wandb.ai/dahalaamosh-harrisburg-university/Phishing_Detection_DistilBERT_Uncased)
+

The fine-tuned DistilBERT model was evaluated on a test dataset containing both phishing and legitimate emails. Below is a summary of its performance compared to baseline models (raw DistilBERT and raw BERT):

### Fine-Tuned DistilBERT (Best Performing)
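
The README reports only summary metrics, so here is a minimal sketch of how an evaluation like this can be run. It is not the repository's own evaluation script: the checkpoint path `./distilbert-phishing-checkpoint`, the sample emails, and the 1 = phishing / 0 = legitimate label convention are illustrative assumptions, and it uses the Hugging Face `transformers` text-classification pipeline with scikit-learn metrics.

```python
# Illustrative evaluation sketch (assumptions noted below), not the repo's script.
from transformers import pipeline
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Assumed path to a locally saved fine-tuned DistilBERT checkpoint.
clf = pipeline("text-classification", model="./distilbert-phishing-checkpoint")

# Placeholder test examples; in practice, load the held-out phishing/legitimate split.
test_texts = [
    "Your account is locked, verify now at http://example.bad",
    "Agenda for tomorrow's project sync is attached.",
]
test_labels = [1, 0]  # Assumed convention: 1 = phishing, 0 = legitimate.

preds = clf(test_texts, truncation=True)
# Map pipeline label strings (e.g. "LABEL_1") to integers; adjust to the model's id2label config.
pred_labels = [1 if p["label"].endswith("1") else 0 for p in preds]

acc = accuracy_score(test_labels, pred_labels)
prec, rec, f1, _ = precision_recall_fscore_support(test_labels, pred_labels, average="binary")
print(f"accuracy={acc:.3f}  precision={prec:.3f}  recall={rec:.3f}  f1={f1:.3f}")
```

The same metric code can presumably be pointed at raw `distilbert-base-uncased` and `bert-base-uncased` checkpoints to reproduce the baseline comparison mentioned above.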