SAN stands for Storage Area Network. SAN storage is usually reserved for enterprise environments, meaning companies that can afford $250,000 or more for their hard drive storage. SAN storage offers several benefits: redundancy, high IOPS, and high bandwidth. Cost is the largest drawback, because a SAN requires a dedicated storage-only network. This network is usually Fibre Channel, but it can also be IP-based; for best results the links should be aggregated (LACP, defined in IEEE 802.1AX), which means you need a switch that supports 802.1AX and multiple network ports on each server. So not only is SAN storage expensive to buy, it is expensive to support.
So why use SAN storage? One word: performance. A single high-performance 14TB Seagate HDD can deliver data at up to 250MB/s and sustain 69 IOPS (input/output operations per second) for random reads and 222 IOPS for writes, with an MTBF (Mean Time Between Failures) of 1,200,000 hours. For a single-threaded piece of software, this works well. But as soon as you start multitasking, doing disk-intensive work, and have thousands of hours of intellectual property tied up in your hard drives, you need more than that. A current high-end SAN device such as the IBM DS8888 offers up to 2,500 KIOPS (2.5 million IOPS) and up to 58GB/s of throughput. In addition, these systems can tolerate multiple simultaneous drive failures.
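To make the gap concrete, here is a rough back-of-the-envelope comparison using the figures quoted above. The workload sizes (a 1TB sequential transfer and one million random reads) are hypothetical examples, and real-world results depend heavily on the workload mix:

```python
# Simplified comparison of the single-HDD and DS8888 figures quoted above.
# Workload sizes are hypothetical; real performance varies with the workload.

hdd_throughput_mb_s = 250        # single 14 TB HDD, sequential
hdd_read_iops = 69               # random reads
san_throughput_mb_s = 58_000     # IBM DS8888, up to 58 GB/s
san_iops = 2_500_000             # 2,500 KIOPS

workload_mb = 1_000_000          # a 1 TB sequential transfer
random_reads = 1_000_000         # one million random reads

hdd_seq_seconds = workload_mb / hdd_throughput_mb_s
san_seq_seconds = workload_mb / san_throughput_mb_s
print(f"1 TB sequential: HDD {hdd_seq_seconds / 60:.0f} min, SAN {san_seq_seconds:.1f} s")

hdd_rand_seconds = random_reads / hdd_read_iops
san_rand_seconds = random_reads / san_iops
print(f"1M random reads: HDD {hdd_rand_seconds / 3600:.1f} h, SAN {san_rand_seconds:.1f} s")
```

On these numbers, the sequential transfer drops from over an hour to under twenty seconds, and the random-read workload from about four hours to under a second.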
IOPS measures how many times per second a device can read or write data; the higher this number, the better. Throughput measures how much data per second the device can transfer. MTBF indicates, statistically, how long the average drive will last before failure; again, a higher number is better.
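One caveat worth making explicit: MTBF is a fleet statistic, not a lifespan promise for any one drive. A small calculation shows how to turn the 1,200,000-hour figure above into a more intuitive annualized failure rate (the 200-drive fleet is a hypothetical example):

```python
# A 1,200,000-hour MTBF does not mean a drive lasts 137 years; it means a
# large fleet running continuously sees roughly one failure per
# 1,200,000 drive-hours.

mtbf_hours = 1_200_000
hours_per_year = 8_766           # average year, including leap years

# Annualized failure rate (AFR), assuming a constant failure rate
afr = hours_per_year / mtbf_hours
print(f"AFR: {afr:.2%}")         # ~0.73% of drives fail per year

# Expected failures per year in a hypothetical 200-drive fleet
fleet = 200
print(f"Expected failures/year across {fleet} drives: {fleet * afr:.1f}")
```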
If your business model relies on HDDs, you need a balance between performance, reliability, and price. If you are a small business with three or four employees or partners, the chances are slim that you can afford an IBM DS8888, let alone someone to maintain it. Is there a system that can approach the performance and reliability of a SAN, but with the ease of maintenance and low cost of consumer HDDs? I think there are a few options. You can use a local RAID 10 array, which halves your usable storage (four drives give you the capacity of two), shares the workload across two mirrored pairs, and can survive up to two drive failures, provided they occur in different pairs. A good idea is to keep a spare drive handy in case of failure. This is your cheapest option with the least administration; it can be done in software on desktop Windows, macOS, and Linux, and nearly all RAID controllers offer it as an option. The other option requires some advanced Linux administration skills but can provide very high reliability and performance: using a combination of RAM, SSD, and HDD with the ZFS file system (available on Linux as OpenZFS), you might be able to build a solid, reliable, high-performance tiered storage solution.
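The RAID 10 capacity and fault-tolerance math above can be sketched as follows, assuming a minimal four-drive array of the 14TB disks discussed earlier:

```python
# Capacity and fault-tolerance math for a 4-drive RAID 10 array:
# two mirrored pairs, striped together. Drive size assumed 14 TB.

drive_tb = 14
drives = 4
mirrors = drives // 2            # RAID 10 stripes across mirrored pairs

raw_tb = drives * drive_tb
usable_tb = mirrors * drive_tb   # half the raw capacity is usable
print(f"Raw: {raw_tb} TB, usable: {usable_tb} TB ({usable_tb / raw_tb:.0%})")

# The array survives one failure per mirrored pair, so up to two failed
# drives in total, but only if each failure hits a different pair. After
# one drive fails, losing its mirror partner loses the array, which is
# why keeping a cold spare and rebuilding promptly matters.
```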