This is a quantized version of DistilGPT-2 optimized for browser deployment.

Smaller file size: 120 MB, compared to 317 MB for the original model.
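The card does not specify a runtime, but a common way to run a quantized DistilGPT-2 checkpoint in the browser is through Transformers.js. The sketch below assumes the repository ships ONNX weights loadable by that library; the repo ID is the one shown on this card, and the prompt and generation settings are purely illustrative.

```ts
// Minimal sketch: load the quantized model with Transformers.js and generate a completion.
// Assumption: the repo provides weights compatible with this library; adjust as needed.
import { pipeline } from '@xenova/transformers';

async function main() {
  // Text-generation pipeline; the quantized weights keep the in-browser download small.
  const generator = await pipeline(
    'text-generation',
    'GhostScientist/distilgpt2-int8-browser-completion' // repo ID from this card (assumed loadable here)
  );

  // Illustrative prompt and generation parameters.
  const output = await generator('function add(a, b) {', {
    max_new_tokens: 32,
    temperature: 0.7,
    do_sample: true,
  });

  console.log(output[0].generated_text);
}

main();
```

Because the pipeline runs entirely client-side, the first call downloads and caches the model files; subsequent loads are served from the browser cache.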


Model repository: GhostScientist/distilgpt2-int8-browser-completion (quantized from distilgpt2).