aifeifei798 (PRO)
AI & ML interests: roleplay aifeifei


Organizations

Hugging Face Discord Community

aifeifei798's activity

posted an update 2 days ago
A small program for adding a watermark to images:
from PIL import Image, ImageDraw, ImageFont

def add_watermark(image):
    watermark_text = "AI Generated by DarkIdol FeiFei"

    # Ensure the input is an Image object
    if not isinstance(image, Image.Image):
        raise ValueError("Input must be a PIL Image object")

    width, height = image.size

    # Create a drawing object; "RGBA" mode is needed so the semi-transparent
    # fill used below actually blends with the image instead of being opaque
    draw = ImageDraw.Draw(image, "RGBA")

    # Set the font size for the watermark text
    font_size = 10
    try:
        # Try to use a common font file
        font = ImageFont.truetype("Iansui-Regular.ttf", font_size)
    except IOError:
        # Fall back to Pillow's built-in bitmap font (fixed size, ignores
        # font_size) if the specified font file is not found
        font = ImageFont.load_default()

    # Calculate the width and height of the watermark text using textbbox
    bbox = draw.textbbox((0, 0), watermark_text, font=font)
    text_width = bbox[2] - bbox[0]
    text_height = bbox[3] - bbox[1]

    # Calculate the position for the watermark text (bottom-right corner)
    x = width - text_width - 10  # 10 is the right margin
    y = height - text_height - 10  # 10 is the bottom margin

    # Draw the watermark text in semi-transparent white (alpha 128)
    draw.text((x, y), watermark_text, font=font, fill=(255, 255, 255, 128))

    # Return the modified image object
    return image

- Fonts can be found at https://fonts.google.com; the program is clearly commented, so modify it as needed. A minimal usage sketch follows below.
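
A minimal usage sketch, assuming a hypothetical input file photo.jpg (both file names below are placeholders for this example):

from PIL import Image

# "photo.jpg" and "photo_watermarked.jpg" are placeholder paths
img = Image.open("photo.jpg").convert("RGB")  # RGB so the RGBA fill blends predictably
watermarked = add_watermark(img)
watermarked.save("photo_watermarked.jpg")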
reacted to m-ric's post with 👍 3 months ago
Hugging Face releases Picotron, a microscopic lib that solves LLM training 4D parallelization 🥳

🕰️ Llama-3.1-405B took 39 million GPU-hours to train, i.e. about 4.5 thousand years of single-GPU time (39,000,000 h ÷ 8,760 h/year ≈ 4,450 years).

👴🏻 If they had needed all this time, we would have GPU stories from the time of Pharaoh 𓂀: "Alas, Lord of Two Lands, the shipment of counting-stones arriving from Cathay was lost to pirates; this shall delay the building of your computing temple by many moons."

🛠️ But instead, they parallelized the training across 24k H100s, bringing it down to just a few months.
This required parallelizing across 4 dimensions: data, tensor, context, and pipeline.
It is infamously hard to do, making for bloated code repos that hold together only by magic.
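
A hedged sketch of what the 4 dimensions mean in practice (the degrees and names below are illustrative assumptions, not Picotron's actual configuration): every GPU gets a coordinate in a 4D device mesh whose dimension sizes multiply out to the fleet size.

# Illustrative 4D device-mesh factorization (assumed degrees, not Picotron's config)
dp, tp, cp, pp = 64, 8, 6, 8      # data / tensor / context / pipeline degrees
world_size = dp * tp * cp * pp    # 64 * 8 * 6 * 8 = 24_576 GPUs, i.e. the "24k H100s"
# Each GPU is addressed by coordinates (d, t, c, p):
#   data:     replicas that each process a different shard of the global batch
#   tensor:   shards of each weight matrix within a layer
#   context:  shards of the sequence dimension (for long-context attention)
#   pipeline: consecutive groups of layers, executed as pipeline stages
print(world_size)  # 24576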

🤏 But now we don't need huge repos anymore! Instead of building mega-training codebases, Hugging Face colleagues cooked in the other direction, towards tiny 4D-parallelism libs. A team built Nanotron, already widely used in industry.
And now a team releases Picotron, a radical approach that codes 4D parallelism in just a few hundred lines, a real engineering feat that makes it much easier to understand what's actually happening!

⚡ It's tiny, yet powerful:
Measured in MFU (Model FLOPs Utilization: the fraction of the hardware's peak compute that training actually uses), the lib reaches ~50% on a SmolLM-1.7B model with 8 H100 GPUs, close to what the huge libs reach. (Caution: the team is running further benchmarks to verify this.)
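
For intuition, a minimal back-of-the-envelope MFU sketch; the throughput figure below is an assumed number chosen to land near 50%, not a Picotron measurement:

n_params = 1.7e9                  # SmolLM-1.7B parameter count
tokens_per_sec = 388_000          # hypothetical measured training throughput
flops_per_token = 6 * n_params    # standard ~6N FLOPs-per-token estimate for training
achieved_flops = flops_per_token * tokens_per_sec
peak_flops = 8 * 989e12           # 8 H100s at ~989 dense BF16 TFLOPs each
mfu = achieved_flops / peak_flops
print(f"MFU ≈ {mfu:.1%}")         # ≈ 50.0% with these assumed numbers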

Go take a look 👉 https://github.com/huggingface/picotron/tree/main/picotron