Well, personally, I'm deeply torn on the subject.
On one hand, you've got Google claiming their dilution-fridge baby (these chips run at millikelvin temperatures, far colder than liquid nitrogen) can finally perform "stable enough" quantum error correction to start working on actual math problems and take advantage of the hardware's immense cross-connectivity and bandwidth. And they may well be right that, if the QEC holds up, quantum acceleration could eventually speed up certain AI workloads, maybe even parts of LLM training or inference.
On the other hand... I absolutely adore the fact that these tools are SO open and SO portable that I can quite literally build an entire AI model from scratch on my desktop, given enough time. My biggest concern is that very few private citizens, if any, are going to be capable of maintaining a dilution-cooled quantum mainframe in their basement. It's not just the stereotypical "nerd virtual girlfriend" use cases I'm concerned about, either. How many datasets on this very website would have been utterly impossible if everyone had to queue up for supercomputer time every. Single. Training. Loop?
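For anyone who hasn't tried it, here's roughly what I mean by "from scratch on my desktop": a toy character-level language model in PyTorch. The corpus, the TinyLM model, and every hyperparameter below are placeholders I made up for illustration, not a recipe; the point is that every single training step runs on your own hardware, no queue.

```python
# A minimal sketch of a desktop-scale training loop.
# All names and numbers here are illustrative, not a recommendation.
import torch
import torch.nn as nn

# Toy corpus: repeat a pangram so the model has something learnable.
corpus = "the quick brown fox jumps over the lazy dog " * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class TinyLM(nn.Module):
    """A deliberately tiny next-character predictor."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len, batch = 32, 16
for step in range(200):  # every one of these loops runs locally
    # Sample random windows; targets are the inputs shifted by one char.
    ix = torch.randint(0, len(data) - seq_len - 1, (batch,))
    x = torch.stack([data[i : i + seq_len] for i in ix])
    y = torch.stack([data[i + 1 : i + seq_len + 1] for i in ix])
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```

Trivial, sure, but the same loop scales up to whatever your GPU can hold, and that accessibility is exactly what's at stake.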
So naturally I'm highly concerned, as we all should be, that relying on quantum computing for anything beyond the most onerous, resource-hungry workloads will end up dooming hobbyist AI, and will cause us quite a few problems down the road once corporations realize the sheer scope of psychological warfare they can inflict on their customers, at will, to make them more profitable and more helpless.
TL;DR: Quantum AI has lots of potential for both good and bad, but as an open-source-first community we must focus first on what we can improve, maintain, and sustain with our own equipment.