I am personally of the opinion that the larger models, especially technically proficient ones like Claude or 4o, have likely been intentionally 'broken' away from storytelling as they have become more helpful and rigorous in their role as co-engineers. I have personally conscripted Claude for some testing, and it has given me roughly a third of an AI model that I basically only had to design and debug, rather than having to work out every detail myself without knowing how the pieces interact. That lack of hallucination and that knack for deterministic writing likely detract from whatever creative elements remain. Picture an autistic savant whose gift is programming and logic: a genius at code, but likely poor at creative writing unless specifically instructed. The same would be true of a synthetic mind fed mostly factual, grounded data for much of its training, as Anthropic seems to be doing for (obvious) safety reasons.