Update processing_qwen2_ts.py to allow text-only processing

#6

This change is required for two purposes:

  1. Text-only inference (no time-series input).
  2. The vLLM v1 engine with prompt caching, where the vLLM engine processes the text and multimodal parts of the prompt separately (see the sketch after this list).
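Below is a minimal sketch of the kind of change the PR describes: making the time-series argument optional in the processor's `__call__` so a plain-text prompt still tokenizes. The class name `Qwen2TSProcessorSketch` and the `timeseries` keyword are assumptions for illustration, not the actual code in processing_qwen2_ts.py.

```python
# Hypothetical sketch: tolerate a missing timeseries argument so the
# text-only path (and vLLM's split text / multimodal processing) works.
# Names and signatures are illustrative, not the repository's real API.
from typing import List, Optional

import torch


class Qwen2TSProcessorSketch:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(
        self,
        text: List[str],
        timeseries: Optional[List[torch.Tensor]] = None,  # now optional
        padding: bool = True,
        return_tensors: str = "pt",
    ):
        # Always tokenize the text part of the prompt.
        inputs = self.tokenizer(
            text, padding=padding, return_tensors=return_tensors
        )
        # Attach time-series features only when they are provided;
        # an unconditional branch here would fail on text-only input.
        if timeseries is not None:
            inputs["timeseries"] = torch.nn.utils.rnn.pad_sequence(
                timeseries, batch_first=True
            )
        return inputs
```

With this guard, a call such as `processor(text=["hello"])` returns ordinary tokenized inputs, while calls that do pass time-series tensors behave as before.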
bytedance-research org

Thank you for fixing this!

xiezhe24 changed pull request status to merged