Unintended conversion of "NA" and "NULL" to NaN

#1
by DNivalis - opened

In the dataset, the strings "NA" and "NULL" are unintentionally converted to NaN. This is expected behavior in pandas, which the datasets library relies on for parsing CSV files. According to the pandas.read_csv documentation, the following values are interpreted as NaN by default: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.
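
The behaviour is easy to reproduce with pandas directly. A small sketch (the toy rows are made up, but the column names match the ones used in the workaround further down):

import io
import pandas as pd

# Toy CSV (hypothetical rows) with the same columns as the dataset.
raw = io.StringIO("Word,Number of Letters\nNA,2\nNULL,4\nCAT,3\n")

df = pd.read_csv(raw)
# "NA" and "NULL" are silently parsed as missing values.
print(df["Word"].isnull().tolist())  # -> [True, True, False]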

I’m unsure if there’s a straightforward way to prevent this during parsing. As a workaround, I’ve been converting the data to a DataFrame and applying the following fixes:

# Rows where a 4-letter word became NaN must originally have been the string "NULL"
df.loc[(df['Number of Letters'] == 4) & (df['Word'].isnull()), 'Word'] = 'NULL'
# Rows where a 2-letter word became NaN must originally have been the string "NA"
df.loc[(df['Number of Letters'] == 2) & (df['Word'].isnull()), 'Word'] = 'NA'

This restores "NULL" and "NA" based on the number of letters, but I’d welcome suggestions for a more elegant solution if anyone has ideas!
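
One possibly cleaner route is to disable pandas' default NaN sentinels at parse time and only then build the Dataset. A sketch, assuming the raw CSV file is available locally (the "words.csv" path is a placeholder) and that a round trip through pandas is acceptable:

import pandas as pd
from datasets import Dataset

# keep_default_na=False turns off pandas' built-in list of NaN markers,
# so "NA" and "NULL" survive as literal strings. Pass na_values=[...] if
# some markers should still count as missing.
df = pd.read_csv("words.csv", keep_default_na=False)

ds = Dataset.from_pandas(df)

I haven't checked whether load_dataset("csv", ...) forwards keep_default_na to pandas; if it does, the same flag could be passed there and the manual round trip avoided.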
