Add chat_template to tokenizer_config
#58
by alexmarques · opened
This fix allows certain LLM evaluation harnesses (e.g., lm-evaluation-harness) to use the tokenizer instead of the processor to tokenize the inputs.
Why not use the same chat template as in 2501? https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501/blob/main/tokenizer_config.json
The 3.1 model supports image inputs, which needs to be accounted for in the chat template.
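For context, the chat_template field in tokenizer_config.json is a Jinja template that the tokenizer's apply_chat_template method renders before tokenization. A minimal sketch of how such a template turns a conversation into a prompt string, using a simplified Mistral-style template (this is NOT the exact template from this PR; the real one also handles system messages and the 3.1 model's image placeholders):

```python
from jinja2 import Template

# Simplified, illustrative Mistral-style chat template.
# The actual template shipped in tokenizer_config.json is more involved.
CHAT_TEMPLATE = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "[INST] {{ message['content'] }}[/INST]"
    "{% elif message['role'] == 'assistant' %}"
    " {{ message['content'] }}{{ eos_token }}"
    "{% endif %}"
    "{% endfor %}"
)

def render_chat(messages, bos_token="<s>", eos_token="</s>"):
    """Render a conversation roughly the way tokenizer.apply_chat_template
    would, given a chat_template entry in tokenizer_config.json."""
    return Template(CHAT_TEMPLATE).render(
        messages=messages, bos_token=bos_token, eos_token=eos_token
    )

messages = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4"},
]
print(render_chat(messages))
# -> <s>[INST] What is 2+2?[/INST] 4</s>
```

With the template present in tokenizer_config.json, a harness like lm-evaluation-harness can call tokenizer.apply_chat_template(messages) directly, without instantiating the full multimodal processor.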