Are the files being written sequentially, or in mass batches all at once? Also, when you say "long-term stability," do you need to access those same files continually throughout those same 5 years, or can you offload them to backup as they are processed?
You have a couple of options here, and there are going to be tradeoffs in terms of speed and price whichever route you go.
At the top of the heap, you have things like the Texas Memory Systems RamSan: http://www.ramsan.com/
It essentially employs a boatload of solid state drives in a Fibre Channel Storage Area Network (SAN) attached array. With publicly available technology at the moment, it isn't going to get any faster than that or one of the Fusion-io type systems. You are looking at a minimum of $20,000 to get into one of their small units, plus a Fibre Channel attached server at roughly $7,000 (something like an HP DL580 G8), and possibly a Fibre Channel switch at $2,000 - $8,000 depending on options. You're going to want a server with as many processors and cores as you can possibly cram into it, a fair amount of very high speed RAM, and solid state operating system drives in the server too.
If the cost of a small luxury car is not in the budget, you can build a fairly fast disk array on a desktop-class system, something based on a Fusion-io ioDrive or ioFX, for example (http://www.solidstateworks.com/).
That would run about $2,000 for the drive plus machine costs ($2,000 - $3,000 depending on feature set). A server chassis with multiple processor cores would probably still give you better throughput than a desktop would.
If you need to go low budget, plain solid state drives would probably work. You want the fastest one available, on at least a 6 Gb/s SATA III controller. Single-Level Cell (SLC) flash is going to be faster than MLC chips, but much more expensive.
You also want to think about backup windows and hardware redundancy. With the really expensive SAN approach, drive failures are handled in the chassis, and drives can usually be replaced without even shutting down. An enterprise-class backup solution of some sort, like ArcServe R16 with the open file agent, could be configured to back your files up nightly, weekly, monthly, etc. without impacting operations on the files that were open. With the desktop solution, you're looking more at running drives in a RAID 1 or RAID 5 array for hardware redundancy, then installing a secondary drive array to push backups to as needed, and from there off to tape.
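For the desktop route, the nightly push to the secondary array can be as simple as a script that mirrors new or changed files before they head off to tape. Here's a minimal sketch in Python; the size/mtime change check and directory layout are my own assumptions, and this is an illustration, not a substitute for a real backup agent with open-file handling:

```python
import shutil
from pathlib import Path

def sync_to_backup(source: Path, backup: Path) -> int:
    """Copy files that are new or changed since the last pass.

    Returns the number of files copied. Hypothetical example logic:
    a file is considered unchanged if the backup copy has the same
    size and an mtime at least as recent as the source.
    """
    copied = 0
    backup.mkdir(parents=True, exist_ok=True)
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        if dst.exists():
            s, d = src.stat(), dst.stat()
            if s.st_size == d.st_size and s.st_mtime <= d.st_mtime:
                continue  # already backed up, skip it
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 preserves timestamps
        copied += 1
    return copied
```

Run it from a scheduled task nightly; because copy2 preserves timestamps, a second pass with no changes copies nothing.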
Without knowing an approximate budget, I can't really give you a good idea of the best possible approach, but that should at least give you a few things to think over. There are "build your own SAN" options out there, and an array with lots of drives will tend to be better suited to lots of little file writes. For small files, a solid state drive is going to give better throughput in all cases these days. In some cases, for large files, 15,000 RPM rotational drives in a RAID 0 array might have a faster write rate.
If possible, bring a few different solutions in and benchmark them using hard-drive-specific utilities such as ATTO Disk Benchmark. Here are a few individual drive results: http://www.harddrivebenchmark.net/high_end_drives.html
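If you just want a quick sanity check before running a full benchmark utility, a rough write-throughput probe is easy to put together yourself. This is an illustrative micro-benchmark only (the file counts and sizes are arbitrary assumptions), but it lets you compare the many-small-files pattern against larger sequential writes on each candidate system:

```python
import os
import time
from pathlib import Path

def write_throughput(directory: Path, file_count: int, file_size: int) -> float:
    """Write file_count files of file_size bytes each; return MB/s achieved."""
    directory.mkdir(parents=True, exist_ok=True)
    payload = os.urandom(file_size)
    start = time.perf_counter()
    for i in range(file_count):
        with open(directory / f"chunk_{i:06d}.bin", "wb") as fh:
            fh.write(payload)
            fh.flush()
            os.fsync(fh.fileno())  # force the write past the OS cache
    elapsed = time.perf_counter() - start
    return (file_count * file_size) / elapsed / 1_000_000

# Compare many small writes against fewer large sequential writes, e.g.:
# small = write_throughput(Path("bench_small"), 1000, 4 * 1024)   # 1,000 x 4 KiB
# large = write_throughput(Path("bench_large"), 4, 1024 * 1024)   # 4 x 1 MiB
```

Run both patterns on the same volume you plan to use in production; the gap between them is what tells you whether the small-file workload is the bottleneck.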