06-27-2022, 02:27 AM
I've been working on importing large fakebook files with the publicly available CSV indexes. I'm finding that some of the indexes are less accurate than others, and I've ended up with a few hundred "bad" songs in my list (image attached). The bad entries point to the wrong part of the PDF file, usually the first page. My thought was to simply remove the bad batch and re-import. The issue is that I don't see a way to identify just the "bad" songs: there's no metadata I can filter on to isolate the bad batch (e.g., import date or batch ID). If I could do that, I could bulk delete the result set.
I know I could delete my whole library and rebuild it, but I'm hoping there's a better way. I already did that once, forgetting that I would lose my annotations. Lesson learned. I've since written a little Python to fix formatting and sanity-check any CSV before I import it (rough sketch below), and that looks like a good way forward, but I'd still like to learn how to elegantly recover from a bad import should it happen again.
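For anyone curious, here's the general idea of the cleanup script, stripped down to a minimal sketch. It assumes a simple title,page column layout and a known page count for the PDF; the function name, column order, and max_page argument are just illustrative, so you'd adjust them to match whatever format your index actually uses:

import csv
import sys

def clean_index(src, dst, max_page):
    """Copy rows whose page field parses as an in-range integer; report the rest."""
    with open(src, newline="", encoding="utf-8") as fin, \
         open(dst, "w", newline="", encoding="utf-8") as fout:
        reader = csv.reader(fin)
        writer = csv.writer(fout)
        for lineno, row in enumerate(reader, start=1):
            # Assumes column 0 is the song title and column 1 the start page.
            if len(row) < 2:
                print(f"line {lineno}: missing fields, skipped: {row}")
                continue
            title, page = row[0].strip(), row[1].strip()
            if not title:
                print(f"line {lineno}: empty title, skipped")
                continue
            try:
                page_num = int(page)
            except ValueError:
                print(f"line {lineno}: non-numeric page '{page}', skipped")
                continue
            if not 1 <= page_num <= max_page:
                print(f"line {lineno}: page {page_num} out of range, skipped")
                continue
            writer.writerow([title, page_num])

if __name__ == "__main__":
    # usage: python clean_index.py index.csv index_clean.csv 450
    clean_index(sys.argv[1], sys.argv[2], int(sys.argv[3]))

Running it before import at least catches rows that would silently map to page 1, which is where most of my bad entries came from.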
I'm using MSP on Windows.
Thanks,
Matt Trimboli