Some feedback

#3
by asigalov61 - opened

@loubb Hey guys!

First of all, thank you very much for this dataset! It's very nice and it will be very useful to me for my MIR and AI research :)

I've sampled and reviewed the dataset, and I wanted to give you some feedback about it.

  1. While the dataset is great for MIR and music analysis, IMHO it's not very suitable for creating/pre-training symbolic music AI models. I trained a test model on the full version of the dataset (aria-midi-v1-ext), but the results were somewhat disappointing: the model did not learn well from the dataset and couldn't generate good continuations of an arbitrary seed composition. Please see the attached training loss/accuracy graphs and token-embeddings plot.

[Attached images: Performance-Piano-Transformer-Tokens-Embeddings-Plot.png, training_acc_graph.png, training_loss_graph.png]

  2. While I commend you for creating your own transcription model and pipeline, the quality of the transcription could use some improvement. IMHO it's a little choppy and rough compared to, e.g., the ByteDance transcription model, and the implementation is a bit bulky and complex. In fact, I was unable to try your transcription model (Aria-AMT) due to issues with aria-utils and other errors.

  3. It would be great if you would consider creating a processed version of your dataset with bar alignment, bad notes/chords removed, and other normalizations applied.

Overall, I've enjoyed working with the dataset, and regardless of the problems above I think it is still very useful for MIR and music analysis tasks.

I hope this feedback is helpful in some way; I can elaborate further if needed.

Sincerely,

Alex

Hi Alex!

To address some of your concerns:

not very suitable for creating/pre-training symbolic music AI models...

Sorry to hear this! I would suggest trying the aria-midi-pruned-ext.tar.gz split, which is filtered for generative modeling applications. I'll make this clearer in the README. My collaborators and I are quite pleased with the follow-up work we have done on generative modeling and representation learning -- I hope you can get similar results!

the quality of transcription could use some improvement.

Our transcription model outperforms Kong et al. (ByteDance) on all transcription benchmarks, so it's unlikely that this is responsible for issues with MIDI quality. Rather, the source recording may not be played well. If you wish to filter by this, we have included audio_score metadata for each file, which you could use as a proxy measure to filter lower-quality recordings (and therefore MIDI transcriptions).
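For anyone who wants to do this, here is a minimal sketch of such a filter. The metadata file name, its layout, the audio_score key, and the threshold are all assumptions for illustration; adjust them to the dataset's actual schema.

```python
import json
import shutil
from pathlib import Path

# Hypothetical layout: a metadata.json mapping each MIDI filename to a dict
# that includes an "audio_score" field (adapt the paths/keys to the real schema).
METADATA_PATH = Path("aria-midi/metadata.json")
MIDI_DIR = Path("aria-midi/data")
OUT_DIR = Path("aria-midi-filtered")
SCORE_THRESHOLD = 0.8  # tune by listening to a sample of transcriptions

OUT_DIR.mkdir(exist_ok=True)
metadata = json.loads(METADATA_PATH.read_text())

kept = 0
for fname, entry in metadata.items():
    # Drop entries whose recording-quality proxy falls below the threshold.
    if entry.get("audio_score", 0.0) < SCORE_THRESHOLD:
        continue
    src = MIDI_DIR / fname
    if src.exists():
        shutil.copy(src, OUT_DIR / fname)
        kept += 1

print(f"Kept {kept} / {len(metadata)} files with audio_score >= {SCORE_THRESHOLD}")
```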

In fact, I was unable to try your transcription model (Aria-AMT) due to issues with aria-utils and other errors.

Please submit an issue on the GitHub page. I've just tried installing from scratch on a couple of different machines and haven't had any issues. It could be that you are not using a supported operating system (only Linux is supported).

It would be great if you would consider creating a processed version of your dataset with bar alignment, bad notes/chords removed, and other normalizations applied.

This is not currently on our roadmap unfortunately!

Thanks for the feedback

@loubb Thank you very much for your prompt response and guidance. I did not mean to be overly critical :)

I will definitely consider trying the pruned version of the dataset. Thank you for this clarification.

I have a few more questions if you do not mind...

  1. Can you briefly describe the model architecture and tokenization scheme that produced good results for you? I use lucidrains' x-transformers (decoder-only with RoPE) and a custom (asymmetrical) tokenization without velocities (only delta start times, durations, and pitches), which has always produced good results for me (a rough sketch of this kind of tokenization is shown after this list).

  2. Do you get good results with your Aria dataset and models when continuing an arbitrary seed composition? In my experience, a model can generate well but still fail to continue well, and continuation is IMHO the best way to assess a model's abilities and performance.

  3. RE: Aria-AMT: I tried to use it on an ARM GH200 Linux instance with torch 2.6. Do you think ARM may be the problem here? I will try it again on an Intel + H100 instance and submit an issue if it still does not work.
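For concreteness, here is a rough sketch of the kind of velocity-free triplet tokenization mentioned in point 1. The time step, bin counts, and vocabulary layout are illustrative placeholders, not the exact values I use.

```python
# Each note becomes (delta start time, duration, pitch), quantized to a fixed
# time grid; velocities are dropped entirely.

TIME_STEP_MS = 10                  # quantization grid
MAX_TIME_BINS = 128                # delta-time / duration values are clipped here
PITCH_OFFSET = 2 * MAX_TIME_BINS   # pitch tokens live after the two time ranges

def tokenize(notes):
    """notes: list of (start_ms, duration_ms, pitch), sorted by start time."""
    tokens = []
    prev_start = 0
    for start_ms, dur_ms, pitch in notes:
        delta = min((start_ms - prev_start) // TIME_STEP_MS, MAX_TIME_BINS - 1)
        dur = min(dur_ms // TIME_STEP_MS, MAX_TIME_BINS - 1)
        tokens += [delta, MAX_TIME_BINS + dur, PITCH_OFFSET + pitch]
        prev_start = start_ms
    return tokens

# Example: two notes forming a C major third.
print(tokenize([(0, 500, 60), (250, 500, 64)]))
```

With these placeholder bins the vocabulary spans 384 token values (two 128-bin time ranges plus 128 MIDI pitches).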

To clarify my original feedback: in my experience, auto-transcribed or otherwise recorded performance piano datasets (e.g. MAESTRO, GiantMIDI, or ATEPP) usually produce inferior results (higher loss, lower accuracy) and worse continuation performance. My solution has been to dilute such datasets with normalized, high-quality aligned data, but that does not always work either.
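By "dilute" I mean something like the following sketch: drawing training samples from the transcribed corpus and a cleaner aligned corpus with fixed mixing probabilities rather than in proportion to their sizes. The placeholder datasets and the 70/30 split are just for illustration.

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Placeholder token datasets standing in for a transcribed corpus (e.g. Aria-MIDI)
# and a smaller pool of normalized, aligned sequences.
transcribed = TensorDataset(torch.randint(0, 384, (1000, 4096)))
aligned = TensorDataset(torch.randint(0, 384, (200, 4096)))
mix = ConcatDataset([transcribed, aligned])

# Draw ~70% of samples from the transcribed pool and ~30% from the aligned pool,
# regardless of how many sequences each pool actually contains.
p_transcribed, p_aligned = 0.7, 0.3
weights = torch.cat([
    torch.full((len(transcribed),), p_transcribed / len(transcribed)),
    torch.full((len(aligned),), p_aligned / len(aligned)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(mix), replacement=True)
loader = DataLoader(mix, batch_size=16, sampler=sampler)
```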

Also, while the transcriptions may get good scores and metrics, that does not always mean they will sound good to a human listener.

Please let me know what you think.

Sincerely,

Alex

Hi! No worries at all, we appreciate the feedback : )

We can't release the preprint for our generative work yet due to anonymity requirements; however, I can give some details. We used a standard transformer decoder architecture with typical modifications, very similar to the Llama 3 family. We designed a bespoke MIDI tokenizer, which is quite close in design to the one used for MuseNet. Generally, we found that the model produces coherent continuations; however, this is of course subjective, and the model is by no means perfect. Our research aimed to model piano performance, so datasets like MAESTRO / GiantMIDI / ATEPP / Aria-MIDI are the only options. Our dataset is not really appropriate for modeling multi-track or quantized (e.g., ABC/MusicXML) symbolic music, if that is where your research interests lie.

Here are some autoregressive continuations from our generative model, so you can judge for yourself.

[Audio attachments: two generation/prompt pairs]

RE: Aria-AMT: I tried to use it on an ARM GH200 Linux instance with torch 2.6. Do you think ARM may be the problem here? I will try it again on an Intel + H100 instance and submit an issue if it still does not work.

I'd recommend creating a new virtual environment (conda or otherwise), installing from scratch by following the instructions in the README, and testing with the CLI script, also in the README. If this fails, please open a GitHub issue so I can address any installation problems. I've not tested it on ARM; however, I'd assume this wouldn't be an issue as long as the dependencies (torch, torchaudio, librosa, etc.) support ARM.

@loubb Thank you for your response and for the samples :)

The samples are very nice, and they give me hope that the problem is just on my end, not with your dataset. I apologize for jumping to conclusions about it earlier :)

My music AI research also focuses on performance piano and multi-instrumental music, which is why I was happy to see your dataset and jumped on it right away.

I am primarily interested in creating models that can continue seed compositions (a.k.a. prompt continuation) in a stable and coherent way with very smooth transitions. This is how I usually judge the quality and abilities of my own and similar models.

I am going to close this thread for now because I do not have any more feedback or questions at this time. I will open an issue on GitHub for Aria-AMT if it does not work on the Intel instance, which I will try right now.

Thank you very much for your time and responses :)

Congrats on your dataset release and work :)

Sincerely,

Alex

asigalov61 changed discussion status to closed

@loubb Just wanted to give you a little update and some more feedback about dataset performance for pre-training:

I trained another model on the pruned version, as you suggested. The results were slightly better, but not by much.

Loss: ~1.5
Acc: ~0.59

Model seq_len: 4096
Model dic_size: 640

I get significantly better results on my own dataset, which is roughly the same size and is mixed performance/score piano music (loss ~0.5, acc ~0.78).

So I wanted to ask whether you think this may be due to the seq_len of 4096.
What seq_len did you use in your model, and what loss/acc did you get?

I hope you can share this info so that I can figure out what is going on here :)

Thank you.

Alex

Hi, I'm not totally sure what you mean by loss and accuracy here, but if that's next-token prediction on a held-out set, then I'd expect a model trained on piano transcriptions to perform worse in absolute terms, even if you keep the validation set consistent between experiments.

AFAIK losses aren't really directly comparable across datasets for generative modelling (e.g. see most foundation model NLP lit). Higher loss just reflects a more complex or diverse data distribution, not necessarily worse modelling.

Here is an outline of the pre-training procedure we used, taken from our pre-print:

Model Architecture

Our model architecture builds upon the LLaMa 3.2 model family, chosen due to their effectiveness in autoregressive tasks across modalities (Grattafiori et al., 2024). Using the 1B parameter configuration as a starting point, we made several architectural modifications. Firstly, informed by the results of a preliminary hyperparameter sweep, we reduced the hidden-state dimension (d_model) from 2048 to 1536. This decreased the parameter count by roughly half without compromising validation loss. Secondly, we simplified the architecture by opting for standard multi-head attention (with 24 heads) and layer normalization (Vaswani et al., 2017; Ba et al., 2016), instead of the group-query attention and RMS normalization used in standard LLaMa 3 variants (Ainslie et al., 2023; Zhang et al., 2019).
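As an illustration of those numbers only (not the authors' code), a single decoder block with d_model = 1536, 24 heads, standard multi-head attention, and LayerNorm might look like the sketch below; the feed-forward width is a placeholder and rotary position embeddings are omitted.

```python
import torch
import torch.nn as nn

D_MODEL, N_HEADS, FFN_DIM = 1536, 24, 4 * 1536  # FFN width is a placeholder

class DecoderBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # Standard multi-head attention and LayerNorm, rather than GQA + RMSNorm.
        self.attn = nn.MultiheadAttention(D_MODEL, N_HEADS, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(D_MODEL, FFN_DIM), nn.SiLU(), nn.Linear(FFN_DIM, D_MODEL)
        )
        self.norm1 = nn.LayerNorm(D_MODEL)
        self.norm2 = nn.LayerNorm(D_MODEL)

    def forward(self, x, attn_mask=None):
        # Pre-norm causal self-attention followed by the feed-forward network.
        h = self.norm1(x)
        h, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + h
        return x + self.mlp(self.norm2(x))

x = torch.randn(1, 16, D_MODEL)
mask = nn.Transformer.generate_square_subsequent_mask(16)
print(DecoderBlock()(x, attn_mask=mask).shape)  # torch.Size([1, 16, 1536])
```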

Pre-training recipe

We pre-train our model using standard next-token prediction on concatenated sequences of tokenized MIDI files, as detailed in Section 3. A sequence length of 8192 tokens was chosen to balance computational constraints with the need to learn meaningful short and long-term dependencies within piano music. To enhance generalization and prevent overfitting, we implement online data augmentation, randomly varying pitch (±5 semitones), tempo (±20%), and MIDI velocity (±10).
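As a rough illustration of that augmentation step (not the authors' implementation), applied to simple note tuples before tokenization:

```python
import random

def augment(notes):
    """Randomly vary pitch (±5 semitones), tempo (±20%), and velocity (±10).

    notes: list of (onset_ms, duration_ms, pitch, velocity) tuples.
    """
    pitch_shift = random.randint(-5, 5)
    tempo_scale = random.uniform(0.8, 1.2)   # stretch/compress all timings
    vel_shift = random.randint(-10, 10)
    out = []
    for onset, dur, pitch, vel in notes:
        out.append((
            int(onset * tempo_scale),
            int(dur * tempo_scale),
            min(max(pitch + pitch_shift, 21), 108),  # clamp to the piano range
            min(max(vel + vel_shift, 1), 127),       # clamp to valid MIDI velocity
        ))
    return out

print(augment([(0, 500, 60, 80), (250, 500, 64, 72)]))
```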

Setup

We pre-train our model using the AdamW optimizer for 75 epochs over the training corpus. We use a learning rate of 3e-4 with 1000 warmup steps, followed by a linear decay to 10% of the initial rate over the course of training. The model has approximately 650 million parameters and was pre-trained for 9 days on 8 H100 GPUs with a batch size of 16 per GPU.
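A minimal sketch of that optimizer and learning-rate schedule in PyTorch, with a toy model and a placeholder total step count:

```python
import torch

model = torch.nn.Linear(1536, 1536)  # toy stand-in for the actual transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

warmup_steps, total_steps = 1000, 100_000  # total_steps is a placeholder

def lr_lambda(step):
    # Linear warmup to the peak rate, then linear decay to 10% of it.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return 1.0 - 0.9 * min(progress, 1.0)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# Call scheduler.step() once after each optimizer.step() during training.
```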

Louis

@loubb Thank you very much for your detailed response, Louis! This is really helpful, and I really appreciate it!

While I totally agree with you that val loss will be worse when training on solo performance piano music, I think it is still a good indicator of model performance in relative terms.

I am an independent MIR and symbolic music AI researcher, so my models and techniques are much less advanced than what you guys use, and my resources do not allow me to train large-scale models either, which may be the main issue here.

I would be really interested in seeing your final model and paper on this subject when you publish them :)

Regarding the dataset... I ran another test with a diluted version of the pruned Aria dataset and also reduced the dictionary size (640 -> 384), and now I am getting "normal"/good results with it (80% accuracy) with my model setup.

So, I think the problem is that my model is not sufficiently large and not sufficiently trained to capture/handle solo piano performance music on a large scale. This is good to know because I was worried that it might've been something else.

Anyway, thank you again very much for your help and time.

Sincerely,

Alex
