Nanomenta ML

classroom

AI & ML interests

ML for animation, cinema, art

nanomenta's activity

fffiloni
posted an update 3 months ago
Explain like I'm 5 the latest take from @thomwolf on X about Dario's essay on DeepSeek:

—› Open-source AI is like a big cookbook that everyone can read and improve. Instead of a few chefs keeping their recipes secret, anyone can cook, test, and invent new things.

If only one company controls AI, everything stops if they have a problem—like when the internet goes down. With open-source, many people can help, making sure it keeps running smoothly.

AI isn’t just a race between two countries; it’s a team effort around the world. By sharing, we move faster and create safer technology for everyone.
—
🤗
fffiloni
posted an update 7 months ago
Visionary Walter Murch (editor for Francis Ford Coppola), in 1999:

“So let's suppose a technical apotheosis some time in the middle of the 21st century, when it somehow becomes possible for one person to make an entire feature film, with virtual actors. Would this be a good thing?

If the history of oil painting is any guide, the broadest answer would be yes, with the obvious caution to keep a wary eye on the destabilizing effect of following too intently a hermetically personal vision. One need only look at the unraveling of painting or classical music in the 20th century to see the risks.

Let's go even further, and force the issue to its ultimate conclusion by supposing the diabolical invention of a black box that could directly convert a single person's thoughts into a viewable cinematic reality. You would attach a series of electrodes to various points on your skull and simply think the film into existence.

And since we are time-traveling, let us present this hypothetical invention as a Faustian bargain to the future filmmakers of the 21st century. If this box were offered by some mysterious cloaked figure in exchange for your eternal soul, would you take it?

The kind of filmmakers who would accept, even leap, at the offer are driven by the desire to see their own vision on screen in as pure a form as possible. They accept present levels of collaboration as the evil necessary to achieve this vision. Alfred Hitchcock, I imagine, would be one of them, judging from his description of the creative process: "The film is already made in my head before we start shooting."”
—
Read "A Digital Cinema of the Mind? Could Be" by Walter Murch: https://archive.nytimes.com/www.nytimes.com/library/film/050299future-film.html

fffiloni
posted an update 12 months ago
🇫🇷
What impact will AI have on the film, audiovisual and video game industries?
A forward-looking study for industry professionals
— CNC & BearingPoint | 09/04/2024

While Artificial Intelligence (AI) has long been used in the film, audiovisual and video game sectors, the new applications of generative AI are upending our view of what a machine is capable of, and carry unprecedented transformative potential. The quality of their output is striking, and they consequently spark many debates, between expectations and apprehensions.

The CNC has therefore decided to launch a new AI Observatory, in order to better understand the uses of AI and its real impact on the image industries. As part of this Observatory, the CNC set out to draw up an initial overview by mapping the current and potential uses of AI at each stage of the creation and distribution of a work, identifying the associated opportunities and risks, particularly in terms of professions and employment. This CNC / BearingPoint study presented its main findings on March 6, during the CNC conference day "Creating, producing, distributing in the age of artificial intelligence".

The CNC is publishing the expanded version of this mapping of AI uses in the film, audiovisual and video game industries.

Link to the full mapping: https://www.cnc.fr/documents/36995/2097582/Cartographie+des+usages+IA_rapport+complet.pdf/96532829-747e-b85e-c74b-af313072cab7?t=1712309387891
fffiloni
posted an update about 1 year ago
"The principle of explainability of AI and its application in organizations"
Louis Vuarin, Véronique Steyer
—› 📔 https://doi.org/10.3917/res.240.0179

ABSTRACT: The explainability of Artificial Intelligence (AI) is cited in the literature as a pillar of AI ethics, yet few studies explore its organizational reality. This study proposes to remedy this shortcoming, based on interviews with actors in charge of designing and implementing AI in 17 organizations. Our results highlight: the massive substitution of explainability by the emphasis on performance indicators; the substitution of the requirement of understanding by a requirement of accountability; and the ambiguous place of industry experts within design processes, where they are employed to validate the apparent coherence of 'black-box' algorithms rather than to open and understand them. In organizational practice, explainability thus appears sufficiently undefined to reconcile contradictory injunctions. Comparing prescriptions in the literature and practices in the field, we discuss the risk of crystallizing these organizational issues via the standardization of management tools used as part of (or instead of) AI explainability.

Vuarin, Louis, and Véronique Steyer. « Le principe d'explicabilité de l'IA et son application dans les organisations », Réseaux, vol. 240, no. 4, 2023, pp. 179-210.

#ArtificialIntelligence #AIEthics #Explainability #Accountability
fffiloni
posted an update over 1 year ago
I'm happy to announce that ✨ Image to Music v2 ✨ is ready for you to try, and I hope you'll like it too! 😌

This new version has been crafted with transparency in mind, so you can understand the process of translating an image into a musical equivalent.

How does it work under the hood? 🤔

First, we get a very literal caption from microsoft/kosmos-2-patch14-224; this caption is then given to an LLM agent (currently HuggingFaceH4/zephyr-7b-beta), whose task is to translate the image caption into a musical, inspirational prompt for the next step.

Once we have a nice musical text from the LLM, we send it to the text-to-music model of your choice:
MAGNet, MusicGen, AudioLDM-2, Riffusion or Mustango
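The three-stage chain described above can be sketched as follows. This is a minimal illustration, not the Space's actual code: the stage functions are hypothetical stand-ins for the Gradio API calls to the captioning, LLM, and text-to-music Spaces.

```python
# Hypothetical sketch of the Image to Music v2 pipeline.
# caption_fn, llm_fn and music_fn stand in for Gradio API calls to the
# vision model (Kosmos-2), the LLM agent (Zephyr-7B) and the chosen
# text-to-music model (MAGNet, MusicGen, AudioLDM-2, Riffusion, Mustango).

def image_to_music(image_path, caption_fn, llm_fn, music_fn):
    """Chain caption -> musical prompt -> audio, as in Image to Music v2."""
    # Stage 1: very literal caption of the input image
    caption = caption_fn(image_path)
    # Stage 2: the LLM agent turns the caption into a musical prompt
    music_prompt = llm_fn(
        "Translate this image caption into an inspirational "
        f"text-to-music prompt: {caption}"
    )
    # Stage 3: the text-to-music model renders the prompt
    return music_fn(music_prompt)
```

The injected callables make the chain easy to rewire: swapping the final model (as the demo lets you do) means passing a different `music_fn`.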

Unlike the previous version of Image to Music, which used the Mubert API and could produce curious and obscure combinations, this version only uses open-source models available on the Hub, called via the Gradio API.

The music should also match the atmosphere of the input image more closely, thanks to the LLM agent step.

Pro tip: you can adjust the inspirational prompt to match your expectations, depending on the chosen model and its specific behavior 👌

Try it, explore different models and tell me which one is your favorite 🤗
—› fffiloni/image-to-music-v2
fffiloni
posted an update over 1 year ago
InstantID-2V is out! ✨

It's like InstantID, but you get a video instead. Nothing crazy here, it's simply a shortcut between two demos.

Here's how it works via the Gradio API:

1. We call InstantX/InstantID with a conditional pose from a cinematic camera shot (an example is provided in the demo)
2. Then we send the generated image to ali-vilab/i2vgen-xl
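The two-step shortcut can be sketched like this. Again a minimal illustration with hypothetical helpers, not the Space's real code: `instantid_fn` and `i2v_fn` stand in for the Gradio API calls to the two Spaces named above.

```python
# Hypothetical sketch of the InstantID-2V shortcut between two demos.
# instantid_fn stands in for a Gradio API call to InstantX/InstantID,
# i2v_fn for a call to ali-vilab/i2vgen-xl.

def instantid_2v(face_image, pose_image, instantid_fn, i2v_fn):
    """Generate a posed still from a face, then animate it into a video."""
    still = instantid_fn(face_image, pose_image)  # step 1: InstantID
    return i2v_fn(still)                          # step 2: i2vgen-xl
```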

Et voilà 🤗 Try it: fffiloni/InstantID-2V

—
Note that generation can take quite a while, so take the opportunity to brew yourself some coffee 😌
If you want to skip the queue, you can of course reproduce this pipeline manually.
fffiloni
posted an update over 1 year ago
Quick build of the day: LCM Supa Fast Image Variation
—
We combine moondream1's vision capabilities with LCM SDXL's speed to generate a variation of the input image's subject.
All that thanks to Gradio APIs 🤗

Try the space: https://huggingface.co/spaces/fffiloni/lcm-img-variations
fffiloni
posted an update over 1 year ago
Just published a quick community blog post, mainly aimed at art and design students, but also an attempt to nudge AI researchers toward considering the benefits of collaborating with designers and artists 😉
Feel free to share your thoughts!

"Breaking Barriers: The Critical Role of Art and Design in Advancing AI Capabilities" 📄 https://huggingface.co/blog/fffiloni/the-critical-role-of-art-and-design-in-advancing-a

—
This short publication follows the results of two AI workshops that took place at École des Arts Décoratifs - Paris, led by Etienne Mineur, Vadim Bernard, Martin de Bie, Antoine Pintout & Sylvain Filoni.
fffiloni
posted an update over 1 year ago
I just published a Gradio demo for Alibaba's DreamTalk 🤗

Try it now: fffiloni/dreamtalk
Paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models (2312.09767)
—
DreamTalk is a diffusion-based audio-driven expressive talking head generation framework that can produce high-quality talking head videos across diverse speaking styles. DreamTalk exhibits robust performance with a diverse array of inputs, including songs, speech in multiple languages, noisy audio, and out-of-domain portraits.
fffiloni
posted an update over 1 year ago
just setting up my new hf social posts account feature 🤗