The Internet Archive, Wikimedia, academics, and hobby archivists are having trouble finding hard drives, or are paying extremely high prices for them.
I would bet that a lot of the storage that AI companies are picking up isn’t for the model itself, but for storing the huge amount of information that they want to use as their training corpus.
I’d bet that what they do is something like this:
1. Download data and store it in its original form, non-destructively. This layer probably isn't read all that frequently. When you see bots sucking down the whole Web, this is the sort of thing that's involved.

2. Maintain some kind of filtered training corpus. This throws out a lot of material that's useless for training. It's generated from #1 by filtering software, and it's probably smaller than #1. Probably a lot smaller.

3. Probably some sort of scored index, generated alongside #2, that estimates how useful or reliable each piece of data in #2 should be considered; I'd assume those scores are an input into the training.

4. The model itself, produced by training on #2 (and, presumably, the scores from #3).
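The filter-and-score stages could be sketched roughly like this. To be clear, this is a toy illustration of the idea, not any company's actual pipeline; the heuristics (minimum length, alphabetic ratio, domain-based scoring) are made-up stand-ins for whatever real filtering software does.

```python
# Toy sketch of stages #2 and #3: filter a raw crawl, then score what survives.
# Every heuristic here is hypothetical, chosen only to illustrate the shape.

def filter_corpus(raw_docs):
    """Stage #2: drop documents that look useless for training."""
    kept = []
    for doc in raw_docs:
        text = doc["text"]
        if len(text) < 20:
            continue  # too short to be worth anything
        # Mostly non-alphabetic text is often markup debris or binary junk.
        alpha_ratio = sum(c.isalpha() or c.isspace() for c in text) / len(text)
        if alpha_ratio < 0.8:
            continue
        kept.append(doc)
    return kept

def score_doc(doc):
    """Stage #3: crude usefulness/reliability estimate in [0, 1]."""
    score = 0.5
    if doc.get("source_domain", "").endswith((".edu", ".gov")):
        score += 0.3  # hypothetical: trust institutional domains more
    if len(doc["text"]) > 1000:
        score += 0.1  # hypothetical: longer documents score slightly higher
    return min(score, 1.0)

raw = [
    {"text": "x" * 10, "source_domain": "spam.example"},       # dropped: too short
    {"text": "<<<>>>" * 50, "source_domain": "junk.example"},  # dropped: markup junk
    {"text": "A long, readable article about astronomy. " * 40,
     "source_domain": "example.edu"},                          # kept and scored
]

corpus = filter_corpus(raw)
index = [(doc["source_domain"], score_doc(doc)) for doc in corpus]
```

The point of the sketch is the data-flow shape: #2 is a lossy, much smaller projection of #1, and #3 is cheap metadata riding alongside it, which is why the raw layer can live on slow storage while the later stages can't.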
For the data in stage #1, I’d guess that AI companies might be able to use tapes. That being said, it might make sense to use faster storage if it accelerates the time to iterate on improving the filtering software.
But, yeah, for the later stages, tapes probably aren’t gonna work.