The Aya Vision 8B and 32B models are state-of-the-art multilingual multimodal models developed by Cohere For AI. They build on the Aya Expanse recipe to handle both visual and textual information without compromising on the strong multilingual textual performance of the original model.
Aya Vision 8B combines the Siglip2-so400-384-14 vision encoder with the Cohere CommandR-7B language model, further post-trained with the Aya Expanse recipe, creating a powerful vision-language model capable of understanding images and generating text across 23 languages. Aya Vision 32B, by contrast, uses Aya Expanse 32B as its language model.
Key features of Aya Vision include multimodal understanding across 23 languages and strong text-only multilingual performance inherited from the Aya Expanse recipe.
Tips:
- Images are represented with the <image> tag in the templated input.
- Use the apply_chat_template method of the processor to format your inputs correctly.

This model was contributed by saurabhdash and yonigozlan.
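To see where the <image> tag ends up, you can render the chat template without tokenizing. A minimal sketch (reusing an image URL from the examples below):

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("CohereForAI/aya-vision-8b")

messages = [
    {"role": "user",
     "content": [
         {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
         {"type": "text", "text": "Describe this image."},
     ]},
]

# tokenize=False returns the rendered prompt string, making the image
# placeholder visible in the templated input
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)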
Here’s how to use Aya Vision for inference:
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
model_id = "CohereForAI/aya-vision-8b"
torch_device = "cuda:0"
# Use fast image processor
processor = AutoProcessor.from_pretrained(model_id, use_fast=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map=torch_device, torch_dtype=torch.float16
)

# Format message with the aya-vision chat template
messages = [
    {"role": "user",
     "content": [
         {"type": "image", "url": "https://pbs.twimg.com/media/Fx7YvfQWYAIp6rZ?format=jpg&name=medium"},
         {"type": "text", "text": "चित्र में लिखा पाठ क्या कहता है?"},  # "What does the text in the image say?"
     ]},
]

# Process the image on CUDA via the fast image processor
inputs = processor.apply_chat_template(
    messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", device=torch_device
).to(model.device)

gen_tokens = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.3,
)

gen_text = processor.tokenizer.decode(gen_tokens[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(gen_text)
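If you want tokens to appear as they are generated rather than after the call returns, transformers ships a TextStreamer that plugs into generate(). A minimal sketch reusing model, processor, and inputs from the snippet above:

from transformers import TextStreamer

# Prints decoded tokens to stdout as they are produced; skip_prompt hides
# the echoed input, and skip_special_tokens is forwarded to decode()
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.3,
    streamer=streamer,
)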
from transformers import pipeline
pipe = pipeline(model="CohereForAI/aya-vision-8b", task="image-text-to-text", device_map="auto")
# Format message with the aya-vision chat template
messages = [
    {"role": "user",
     "content": [
         {"type": "image", "url": "https://media.istockphoto.com/id/458012057/photo/istanbul-turkey.jpg?s=612x612&w=0&k=20&c=qogAOVvkpfUyqLUMr_XJQyq-HkACXyYUSZbKhBlPrxo="},
         {"type": "text", "text": "Bu resimde hangi anıt gösterilmektedir?"},  # "Which monument is shown in this image?"
     ]},
]
outputs = pipe(text=messages, max_new_tokens=300, return_full_text=False)
print(outputs)
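The pipeline returns a list of result dicts. Assuming the standard image-text-to-text output format (worth verifying on your transformers version), the generated string can be pulled out directly:

# With return_full_text=False, each result carries only the newly
# generated text under the "generated_text" key
print(outputs[0]["generated_text"])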
Aya Vision can process multiple images in a single conversation. Here’s how to use it with multiple images:
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
model_id = "CohereForAI/aya-vision-8b"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="cuda:0", torch_dtype=torch.float16
)

# Example with multiple images in a single message
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
            },
            {
                "type": "image",
                "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg",
            },
            {
                "type": "text",
                "text": "These images depict two different landmarks. Can you identify them?",
            },
        ],
    },
]

inputs = processor.apply_chat_template(
    messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

gen_tokens = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.3,
)
gen_text = processor.tokenizer.decode(gen_tokens[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(gen_text)
For processing batched inputs (multiple conversations at once):
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
model_id = "CohereForAI/aya-vision-8b"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="cuda:0", torch_dtype=torch.float16
)

# Prepare two different conversations
batch_messages = [
    # First conversation with a single image
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
                {"type": "text", "text": "Write a haiku for this image"},
            ],
        },
    ],
    # Second conversation with multiple images
    [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
                },
                {
                    "type": "image",
                    "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg",
                },
                {
                    "type": "text",
                    "text": "These images depict two different landmarks. Can you identify them?",
                },
            ],
        },
    ],
]

# Process both conversations in a single padded batch
batch_inputs = processor.apply_chat_template(
    batch_messages,
    padding=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

# Generate responses for the batch
batch_outputs = model.generate(
    **batch_inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.3,
)

# Decode the generated responses
for i, output in enumerate(batch_outputs):
    response = processor.tokenizer.decode(
        output[batch_inputs.input_ids.shape[1]:],
        skip_special_tokens=True
    )
    print(f"Response {i+1}:\n{response}\n")
class transformers.AyaVisionProcessor

( image_processor = None, tokenizer = None, patch_size: int = 28, img_size: int = 364, vision_feature_select_strategy = 'full', image_token = '<image>', downsample_factor: int = 1, start_of_img_token = '<|START_OF_IMG|>', end_of_img_token = '<|END_OF_IMG|>', img_patch_token = '<|IMG_PATCH|>', img_line_break_token = '<|IMG_LINE_BREAK|>', tile_token = 'TILE', tile_global_token = 'TILE_GLOBAL', chat_template = None, **kwargs )
Parameters
- image_processor (AutoImageProcessor, optional) — The image processor is a required input.
- tokenizer ([PreTrainedTokenizer, PreTrainedTokenizerFast], optional) — The tokenizer is a required input.
- patch_size (int, optional, defaults to 28) — The size of image patches for tokenization.
- img_size (int, optional, defaults to 364) — The size of the image to be tokenized. This should correspond to the size given to the image processor.
- vision_feature_select_strategy (str, optional, defaults to "full") — The feature selection strategy used to select the vision feature from the vision backbone.
- image_token (str, optional, defaults to "<image>") — The token used to represent an image in the text.
- downsample_factor (int, optional, defaults to 1) — The factor by which to scale the patch size.
- start_of_img_token (str, optional, defaults to "<|START_OF_IMG|>") — The token used to represent the start of an image in the text.
- end_of_img_token (str, optional, defaults to "<|END_OF_IMG|>") — The token used to represent the end of an image in the text.
- img_patch_token (str, optional, defaults to "<|IMG_PATCH|>") — The token used to represent an image patch in the text.
- img_line_break_token (str, optional, defaults to "<|IMG_LINE_BREAK|>") — The token used to represent a line break in the text.
- tile_token (str, optional, defaults to "TILE") — The token used to represent an image tile in the text.
- tile_global_token (str, optional, defaults to "TILE_GLOBAL") — The token used to represent the cover image in the text.
- chat_template (str, optional) — A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.

Constructs an AyaVision processor which wraps an AutoImageProcessor and a PreTrainedTokenizerFast tokenizer into a single processor that inherits both the image processor and tokenizer functionalities. See __call__() and decode() for more information.
batch_decode — This method forwards all its arguments to PreTrainedTokenizerFast's batch_decode(). Please refer to the docstring of this method for more information.
decode — This method forwards all its arguments to PreTrainedTokenizerFast's decode(). Please refer to the docstring of this method for more information.
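For a quick look at what the processor produces outside of chat templating, here is a hedged sketch that assumes the llava-style calling convention, where the text prompt carries the <image> placeholder that the processor expands into the special image tokens listed above:

from PIL import Image
import requests
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("CohereForAI/aya-vision-8b")

url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The <image> placeholder is expanded into <|START_OF_IMG|>, <|IMG_PATCH|>,
# etc., and pixel_values are computed by the wrapped image processor
inputs = processor(images=image, text="<image>Describe the scene.", return_tensors="pt")
print(inputs.input_ids.shape, inputs.pixel_values.shape)

# batch_decode() forwards to the underlying tokenizer, as described above
print(processor.batch_decode(inputs.input_ids)[0][:120])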
class transformers.AyaVisionConfig

( vision_config = None, text_config = None, vision_feature_select_strategy = 'full', vision_feature_layer = -1, downsample_factor = 2, adapter_layer_norm_eps = 1e-06, image_token_index = 255036, **kwargs )
Parameters
- vision_config (Union[AutoConfig, dict], optional, defaults to CLIPVisionConfig) — The config object or dictionary of the vision backbone.
- text_config (Union[AutoConfig, dict], optional, defaults to LlamaConfig) — The config object or dictionary of the text backbone.
- vision_feature_select_strategy (str, optional, defaults to "full") — The feature selection strategy used to select the vision feature from the vision backbone. Can be one of "default" or "full". If "default", the CLS token is removed from the vision features. If "full", the full vision features are used.
- vision_feature_layer (int, optional, defaults to -1) — The index of the layer to select the vision feature.
- downsample_factor (int, optional, defaults to 2) — The downsample factor to apply to the vision features.
- adapter_layer_norm_eps (float, optional, defaults to 1e-06) — The epsilon value used for layer normalization in the adapter.
- image_token_index (int, optional, defaults to 255036) — The image token index to encode the image prompt.

This is the configuration class to store the configuration of an AyaVisionForConditionalGeneration. It is used to instantiate an AyaVision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of AyaVision, e.g. CohereForAI/aya-vision-8b.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
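Following the usual transformers configuration pattern (and assuming both classes are exported at the top level of the library), a configuration can be instantiated and used to build a randomly initialized model:

from transformers import AyaVisionConfig, AyaVisionForConditionalGeneration

# Initializing a default AyaVision-style configuration
configuration = AyaVisionConfig()

# Initializing a model (with random weights) from that configuration
model = AyaVisionForConditionalGeneration(configuration)

# Accessing the model configuration
configuration = model.config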
class transformers.AyaVisionForConditionalGeneration

( config: AyaVisionConfig )
Parameters
- config (AyaVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The AyaVision model, which consists of a vision backbone and a language model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward

( input_ids: LongTensor = None, pixel_values: FloatTensor = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, vision_feature_layer: typing.Optional[int] = None, vision_feature_select_strategy: typing.Optional[str] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, cache_position: typing.Optional[torch.LongTensor] = None, last_cache_position: int = 0, num_logits_to_keep: int = 0, **lm_kwargs )
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using AutoImageProcessor. See GotOcr2ImageProcessor.__call__() for details. AyaVisionProcessor uses GotOcr2ImageProcessor for processing images.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden states (keys and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- vision_feature_layer (int, optional, defaults to -2) — The index of the layer to select the vision feature.
- vision_feature_select_strategy (str, optional, defaults to "default") — The feature selection strategy used to select the vision feature from the vision backbone. Can be one of "default" or "full". If "default", the CLS token is removed from the vision features. If "full", the full vision features are used.
- use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
- num_logits_to_keep (int, optional) — Calculate logits for the last num_logits_to_keep tokens. If 0, calculate logits for all input_ids (special case). Only last token logits are needed for generation, and calculating them only for that token can save memory, which becomes quite significant for long sequences or a large vocabulary size.
The AyaVisionForConditionalGeneration forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
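To see the labels argument in action, here is a hedged sketch of a single forward pass that reuses model, processor, and messages from the inference example above. It illustrates the loss plumbing only, not a training recipe: in practice, prompt and image positions in labels would be set to -100 so they are ignored by the loss.

import torch

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

# Naively reuse the input ids as labels; tokens set to -100 would be
# excluded from the loss, as described in the labels docstring above
labels = inputs.input_ids.clone()

with torch.no_grad():
    outputs = model(**inputs, labels=labels)

print(outputs.loss)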