Tensor Diffusion

community

AI & ML interests

Stable Diffusion, Computer Vision, NLP

Recent Activity


Nymbo
posted an update 2 days ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both of these themes have been updated to fix some of the long-standing inconsistencies ever since the transition to Gradio v5. Textboxes are no longer bright green and in-line code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
not-lain
posted an update about 2 months ago
DamarJati
updated a Space 2 months ago
not-lain
posted an update 3 months ago
not-lain
posted an update 4 months ago
We now have more than 2,000 public AI models using ModelHubMixin 🤗
not-lain
posted an update 4 months ago
Published a new blog post 📖
In this blog post I walk through the Transformer architecture, emphasizing how tensor shapes propagate through each layer.
🔗 https://huggingface.co/blog/not-lain/tensor-dims
Some interesting takeaways:
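As a minimal sketch of the idea (my own illustration, not code from the blog post, with toy dimensions chosen arbitrarily), here is how tensor shapes change as an input moves through a single self-attention block:

```python
import numpy as np

# Toy dimensions, assumed for illustration only
batch, seq_len, d_model, n_heads = 2, 16, 64, 4
head_dim = d_model // n_heads  # 16

x = np.random.randn(batch, seq_len, d_model)      # (2, 16, 64)

# Project to queries/keys/values: the (batch, seq, feature) shape is preserved
w_qkv = np.random.randn(d_model, 3 * d_model)
qkv = x @ w_qkv                                   # (2, 16, 192)
q, k, v = np.split(qkv, 3, axis=-1)               # each (2, 16, 64)

# Split into heads: (batch, n_heads, seq_len, head_dim)
def split_heads(t):
    return t.reshape(batch, seq_len, n_heads, head_dim).transpose(0, 2, 1, 3)

q, k, v = map(split_heads, (q, k, v))             # each (2, 4, 16, 16)

# Attention scores compare every position with every other position
scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(head_dim)  # (2, 4, 16, 16)
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)

# Weighted sum over values, then merge heads back to (batch, seq_len, d_model)
out = (weights @ v).transpose(0, 2, 1, 3).reshape(batch, seq_len, d_model)
print(out.shape)  # (2, 16, 64)
```

Note that the block's output shape matches its input shape, which is what lets Transformer layers be stacked.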
DamarJati
posted an update 4 months ago
Happy New Year 2025 🤗
To the Hugging Face community.
1aurent
posted an update 4 months ago
not-lain
posted an update 6 months ago
Ever wondered how to make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending it to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load a local path, URL, PIL image, or numpy array and convert it to base64
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")