update modeling_baichuan.py for torchscript mode with past_kv

#30

This enables model inference in TorchScript mode with use_cache and return_dict taken from model.config, so past key/values can be reused when the model is traced.
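
For context, a minimal sketch of the usage this is meant to support, assuming a recent PyTorch and transformers install; the checkpoint name is a placeholder, and the decode-step arguments are passed by keyword because the exact positional order of the Baichuan forward is not spelled out here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baichuan-inc/Baichuan-13B-Chat"  # placeholder: any Baichuan checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# torchscript=True makes the model return plain tuples (required by jit.trace);
# use_cache=True asks the forward pass to also return past key/values.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torchscript=True,
    use_cache=True,
).eval()

inputs = tokenizer("Hello", return_tensors="pt")

with torch.no_grad():
    # Prefill step: trace with input_ids only; use_cache / return_dict are
    # picked up from model.config, so no extra call arguments are needed.
    traced_prefill = torch.jit.trace(model, (inputs.input_ids,), strict=False)
    logits, past_key_values = traced_prefill(inputs.input_ids)[:2]

    # Decode step: one new token plus the cached past key/values, passed by
    # keyword so we do not rely on the positional order of the forward.
    traced_decode = torch.jit.trace(
        model,
        example_kwarg_inputs={
            "input_ids": inputs.input_ids[:, -1:],
            "past_key_values": past_key_values,
        },
        strict=False,
    )
```

The key point is that loading with torchscript=True switches the outputs to plain tuples instead of ModelOutput objects, and with this change the forward can fall back to use_cache and return_dict from model.config rather than requiring them as explicit call arguments during tracing.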
