
Jean Louis

JLouisBiz

AI & ML interests

- LLM for sales, marketing, promotion
- LLM for Website Revision System
- increasing quality of communication with customers
- helping clients access information faster
- saving people from financial troubles

Recent Activity

Organizations

RCD Wealth LLC

JLouisBiz's activity

New activity in bartowski/Qwen2-VL-7B-Instruct-GGUF about 9 hours ago

Tool to GGUF conversion (#1, opened 2 months ago by xyutech, 5 comments)
replied to nroggendorff's post 2 days ago

Let's be kind to each other and welcoming; you can solve your issues privately.

New activity in perplexity-ai/r1-1776 2 days ago
New activity in ariG23498/phi4-multimodal 3 days ago
reacted to AdinaY's post with 👍 3 days ago
Open Sora 2.0 is out 🔥
hpcai-tech/open-sora-20-67cfb7efa80a73999ccfc2d5
✨ 11B with Apache 2.0
✨ Low training cost: $200k
✨ Open weights, code, and training workflow
replied to onekq's post 3 days ago

In both matters, innovation and a good example come from the great country of China.

reacted to onekq's post with 👍 3 days ago
Qwen made good students, DeepSeek made a genius.

This is my summary of their differentiation. I don't think these two players are coordinated, but they both have clear goals: one is to build an ecosystem, and the other is to push AGI.

And IMO they are both doing really well.
replied to Mertpy's post 3 days ago

Let's put it this way: there is an article, but no good reasoning in it. It talks about intuition without even defining it, and it says nothing about the fundamental principle of existence, which is "survive". Maybe you could research what the goal of the mind is: https://www.dianetics.org/videos/audio-book-excerpts/the-goal-of-man.html

Instinctive knowing is native to living beings only.

The new anthropomorphism of attributing intuition to a computer doesn't make it so; writing an article doesn't change that.

The human mind wants to survive, and not only as oneself, but as a family, as a group, as mankind, as all living beings, as a planet. Some people are aware that the planet must survive, like Musk, so he builds rockets for Mars, while other people can't understand why. The higher the level of survival we seek, the better we do over the long term.

A computer doesn't want to survive; it is a tool, like a hammer. It has no intuition and no drive to survive, and thus no instincts.

You can of course build data and ask the computer to act upon such data, which the majority of models already do. They give probabilistic computations but know nothing about them. Intuition is human, and descriptions of it are already built into LLMs. If you wish to improve that, you are welcome.

However, I don't see anything revolutionary here.

An LLM is a reflection, or mimicry, of human knowledge.

If you give it operational capacities, such as moving around, targeting people in war, or controlling a house or a business, it will act according to the data it has been given, and it will randomly cause disasters, just as it randomly gives nonsensical results from time to time.

reacted to pidou's post with 👍 3 days ago
testing post
replied to etemiz's post 3 days ago
replied to onekq's post 4 days ago

I have a dmenu script to switch between the models: https://gitea.com/gnusupport/LLM-Helpers/src/branch/main/bin/rcd-llm-dmenu-launher.sh

I just click and then choose the model from the menu.

[screenshot: 2025-03-12_21-28.png]

You mentioned switching modes and switching models. What do you mean by switching modes?

And finally, you can just talk to your model and ask it to give you a shell script, or any other kind of code, to help you switch the mode.

The important thing is that you have defined a command to run each model; then you can put all those commands together in a list and find a way to switch between them, as in the sketch below.
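
A minimal sketch of that launcher idea, assuming dmenu is installed and each model starts with a single shell command; the model names and launch commands are made-up placeholders, not the contents of the actual rcd-llm-dmenu-launher.sh:

```bash
#!/bin/bash
# Minimal dmenu model switcher -- an illustrative sketch, not the
# actual rcd-llm-dmenu-launher.sh. Assumes dmenu is installed and
# each model can be started with one shell command.

# Menu labels mapped to launch commands; the commands here are
# hypothetical placeholders, so substitute your own.
declare -A MODELS=(
  [qwen2-vl-7b]="llama-server -m $HOME/models/qwen2-vl-7b.gguf --port 8080"
  [phi-4]="llama-server -m $HOME/models/phi-4.gguf --port 8080"
)

# Offer the model names in dmenu and read the user's choice.
choice=$(printf '%s\n' "${!MODELS[@]}" | dmenu -p "Model:")

# Start the chosen model in the background, if one was picked.
[ -n "$choice" ] && ${MODELS[$choice]} &
```

Bound to a key or launcher, switching models becomes one click and one menu choice.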

I like speaking; even now I'm speaking, and this comment arrives as text. That means I could speak and have my computer intercept the speech before it reaches any model. Then I could use embeddings to recognize whether I have given a command. You can even use simple string recognition. Based on the spoken command, or the text you are entering, you could then switch the mode or the model, as in the sketch below.
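
As a sketch of the simple string-recognition variant, assuming some speech tool has already transcribed the utterance to text; the trigger phrases and launch commands are invented for illustration:

```bash
#!/bin/bash
# Illustrative sketch: inspect transcribed speech before it reaches
# any model and switch models on a matching command phrase.
# $1 is assumed to hold the transcribed text; all phrases and
# launch commands are hypothetical placeholders.

text="$1"

case "$text" in
  *"switch to qwen"*)
    pkill -f llama-server            # stop the current model, if any
    llama-server -m "$HOME/models/qwen2-vl-7b.gguf" --port 8080 &
    ;;
  *"switch to phi"*)
    pkill -f llama-server
    llama-server -m "$HOME/models/phi-4.gguf" --port 8080 &
    ;;
  *)
    echo "$text"                     # not a command: pass it through
    ;;
esac
```

The same dispatch point is where an embeddings-based matcher could replace the literal string comparison if exact phrases prove too brittle.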

replied to awacke1's post 4 days ago

I tried it with a small data set and it never finished training. It took a really, really long time; I don't know how long, maybe an hour. I was just watching it sit in stage one for a very, very long time.

reacted to MonsterMMORPG's post with 🤗 4 days ago
Ultra Advanced Wan 2.1 App Updates & Famous Squish Effect to Generate Squishing Videos Locally: https://youtu.be/ueMrzmbdWBg

The Squish Effect LoRA has arrived for Wan 2.1. Wan 2.1 is the truly state-of-the-art (SOTA) open-source video generation model, supporting Text to Video (T2V), Video to Video (V2V), and Image to Video (I2V). Our ultra-advanced 1-click Gradio application now supports LoRAs, and today I will show you all the new developments in our Wan 2.1 all-in-one video generation Gradio app. We have added so many new features since the original Wan 2.1 step-by-step tutorial, and we continue to improve our app on a daily basis with amazing updates.

If you want Squish It: AI Squish Video Art locally, free forever, our app, the Squish LoRA, and Wan 2.1 are all you need. Watch this tutorial to learn it all. Moreover, this tutorial shows the majority of the newest features we have implemented over 10 days of non-stop work.

Hopefully many more updates coming soon.
replied to hanzla's post 4 days ago

I have just tried it and it could not read the map. It is very inaccurate with words that are clearly digitally printed on the map. There are many different use cases; it may be good for yours, but for mine it is not.

replied to eliebak's post 4 days ago

The big difference between Google's Gemma and the Qwen models is that Google's is not open source and not free software, while Qwen is truly free software, free as in freedom.

Comparing those models from quite different categories is not right.

Qwen does not limit commercial users, not even the largest companies or governments, while Google does.

The comparison makes no sense.

reacted to Reality123b's post with 👍 4 days ago