
AIGUYCONTENT's activity
Is there slop in this?

https://huggingface.co/soob3123/amoral-gemma3-27B-v2/tree/main

WAR ON MINISTRATIONS

First review: Q5_K_M requires 502 GB RAM, better than Meta's 405B
Interview request: Thoughts on genAI evaluation & documentation
Nemotron 51B

Q6_K vs. Q5_K_L

Model suggestions and requests

Llama 3.1 70B Instruct Lorablated Creative Writer GGUF please

Prompt format

GGUF wen?

BTW, I think you're on to something with what you did to this model. However, I'm still learning about LLMs and cannot offer any advice.
If it's possible, can you somehow make future models follow instructions more carefully?
And I was using Oobabooga.
Thanks, but can you modify the HTML? That formatting was not intentional in my message to you. Here is the HTML, modified so Hugging Face will not format it:
<h 2> What Will My New Blue Widget Look Like?
<h 2> Will my green widget be affected by my decision?
<h 2> Does selling a blue widget require a permit?
Do you understand, or do you have any questions?
Here is how each section (there are three of them) must be formatted:
h2 (the question I provided you)
one paragraph that is 3-4 sentences in length.
</ br>
a 2nd paragraph that is 3-4 sentences in length.
<h 3> this question must build upon the question in the h2.
one paragraph that is 3-4 sentences in length.
I cannot share the answer because it's a blog post for a client. I have modified the three questions in the prompt to remove client info; outside of that, the prompt remains the same.
I think this model is good at following complex instructions, but it's not great at following simple ones. It went off the rails and refused to modify a single sentence that contained a dependent clause.
It stopped following my instructions once the conversation reached ~6-7k tokens in length (a quick way to check this is sketched below).
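A minimal sketch for measuring how many tokens a conversation has reached, assuming the Hugging Face transformers tokenizer for whichever model you are running (the model ID below is only an example, not a recommendation):

```python
# Minimal sketch: count the tokens a chat history occupies in the model's
# context window. The model ID is illustrative -- use the one you actually run.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-72B-Instruct")

conversation = [
    {"role": "user", "content": "Take the below three questions and write 2 paragraphs..."},
    {"role": "assistant", "content": "...model reply..."},
    # ...rest of the chat history...
]

# apply_chat_template renders the history in the model's own prompt format,
# so the token count reflects what the model actually sees.
token_ids = tokenizer.apply_chat_template(conversation)  # tokenize=True by default
print(f"Conversation length: {len(token_ids)} tokens")
```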
Here is the modified prompt (to remove client info):
Take the below three questions and write 2 paragraphs of content, each 3-4 sentences in length, for each question. Then, immediately beneath the 2nd paragraph, create an H3 that contains another question. That question should build upon the question in the H2. Then write 1 paragraph (3-4 sentences in length).
Do NOT add any more HTML. Just identify the h2 and h3 and that's that. Add a line break in between each paragraph for each h2.
What Will My New Blue Widget Look Like?
Will my green widget be affected by my decision?
Does selling a blue widget require a permit?
Here is how each section (there are three of them) must be formatted:
h2 (the question I provided you)
one paragraph that is 3-4 sentences in length.
a 2nd paragraph that is 3-4 sentences in length.
h3 (this question must build upon the question in the h2), then one paragraph that is 3-4 sentences in length.
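For illustration, a minimal sketch (placeholder text only, not the client's content) of the shape one finished section is expected to take, assembled in Python; the H3 follow-up question here is hypothetical:

```python
# Minimal sketch with placeholder text showing the shape of one finished section:
# an H2 question, two short paragraphs separated by a line break, then an H3
# follow-up question with one more paragraph. No other HTML is added.
h2_question = "What Will My New Blue Widget Look Like?"
h3_question = "Can I customize my new blue widget?"  # hypothetical follow-up question

paragraph = "Sentence one of the answer. Sentence two adds detail. Sentence three wraps up."

section = (
    f"<h2>{h2_question}</h2>\n"
    f"{paragraph}\n"
    "<br />\n"
    f"{paragraph}\n"
    f"<h3>{h3_question}</h3>\n"
    f"{paragraph}\n"
)
print(section)
```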
Have you tested this on short form content? e.g. professional content for corporate websites?
Your BigQwen2.5-125B-Instruct self-merge solved a writing problem in the first shot. Technically it wasn't a problem...just a regular prompt that required the model to output content with certain HTML tags.
ChatGPT and all other open-source models (interestingly enough, including regular Qwen 2.5 72B Q8) all failed, no matter how many chances they had to get it right.

Detailed Full Workflow
Medium article : https://medium.com/@furkangozukara/ultimate-flux-lora-training-tutorial-windows-and-cloud-deployment-abb72f21cbf8
Windows main tutorial : https://youtu.be/nySGu12Y05k
Cloud tutorial for GPU poor or scaling : https://youtu.be/-uhL2nW7Ddw
Full detailed results and conclusions : https://www.patreon.com/posts/111891669
Full config files and details to train : https://www.patreon.com/posts/110879657
SUPIR Upscaling (default settings are now perfect) : https://youtu.be/OYxVEvDf284
I used my Poco X6 camera phone and images I took myself
My dataset is far from ready, so it contains many repeated and nearly identical images, but this was rather experimental
Hopefully I will continue taking more shots, improve the dataset, and reduce its size in the future
I trained the CLIP-L and T5-XXL text encoders as well
Since there was so much pushback from the community claiming that my workflow won't work with expressions, I had to take a break from research and use whatever I have
I used my own researched workflow for training with Kohya GUI, and also my own self-developed SUPIR batch upscaling app with face upscaling and automatic LLaVA caption improvement
Download the images to see them at full size; the last provided grid is downscaled by 50%
Workflow
Gather a dataset that has the expressions and perspectives you want to be able to generate after training; this is crucial, because whatever you add, the model can generate it perfectly
Follow one of the LoRA training tutorials / guides
After training your LoRA, use your favorite UI to generate images
I prefer SwarmUI; the prompts I used here, including face inpainting, are below (you can add specific expressions to the prompts):
https://gist.github.com/FurkanGozukara/ce72861e52806c5ea4e8b9c7f4409672
After generating images, use SUPIR to upscale 2x with maximum resemblance
Short Conclusions
Using 256 images certainly caused more overfitting than necessary
...
Model creating gibberish after a certain amount of tokens.
