Storage Hardware

Network storage is a must in machine learning projects, especially when datasets reach petabytes in size. The ideal approach for any project is to design and size storage so users can start small and scale over time. Commodity JBOD systems are an excellent starting point because they come with all the basic components needed to start small and scale later.

There are three choices for storage: 1) the cloud, 2) brand-name vendors such as Dell, HPE, and Nutanix, and 3) DIY (do-it-yourself). We’re big believers in the DIY model, which uses commodity hardware and open-source software. Let’s start by discussing the enterprise storage market, then move on to commodity hardware.

Enterprise Storage Market

According to IDC, the “Worldwide Enterprise External OEM Storage Market” generated $6.9B in revenue in Q2 2021, with vendors shipping 22.1 exabytes of storage capacity. In addition, original design manufacturers sold $6.4B of storage directly to the hyperscalers and shipped 88.7 exabytes. Finally, server-based storage shipped in the same period totaled 45.3 exabytes. In all, 156.1 exabytes of storage capacity shipped in the quarter. Market share per vendor was as follows:

  • First: Dell at 26.8% of worldwide revenue
  • Second: HPE at 10.9%
  • Third: NetApp at 9.9% and Huawei at 8.9%
  • Fourth: Hitachi at 4.9%, IBM at 4.7%, and Pure Storage at 4.1%

Brand-name enterprise storage has its place in the market. In mid-to-large enterprises where hundreds or thousands of employees need to access data to do their jobs, a brand name is the way to go. The same holds if the storage houses medical records, financials, accounting data, regulated-industry data, or other critical information. When things go wrong, and they will, vendor support can come to the rescue. Internet infrastructure and point solutions for supporting machine learning are another matter; commodity hardware and open source work well in those settings.

There are a few caveats to going with a brand name. First, there is vendor lock-in. Second, storage is very expensive. Entry-level prices for basic storage systems start at $40,000 and can climb to $100,000 quickly once dozens of terabytes are added. According to one commenter (we can’t verify this), a 960GB SSD costs $3,500 from one vendor while another charges $1,000 for the same drive.

Here is some pricing we found on public-facing websites for brand-name storage; the short calculation after the list breaks these quotes down per drive and per terabyte.

  • Dell PowerVault ME4012 Storage Array with 6×2.4TB (14.4TB) 10k RPM SAS drives and 3-year next-business-day support: $13,869, with each 2.4TB drive costing $663. For DAS and SAN.
  • Dell PowerVault ME4084 Storage Array with 28×12TB (336TB) 7.2k RPM SAS HDDs and 3-year next-business-day support: $51,599, with each 12TB drive costing $1,053. For DAS and SAN.
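
To put those quotes in perspective, here is a quick Python sanity check using only the figures above: the drive subtotal for each array and the system cost per terabyte. The per-drive prices are as quoted; we’re only doing arithmetic on them.

    # Per-drive subtotal and cost per terabyte for the two Dell quotes above.
    # All prices are the quoted list figures; this is arithmetic only.
    arrays = {
        # name: (system price USD, per-drive price USD, drive count, TB per drive)
        "ME4012": (13_869, 663, 6, 2.4),
        "ME4084": (51_599, 1_053, 28, 12.0),
    }

    for name, (system, drive, count, tb_each) in arrays.items():
        drives_total = drive * count
        capacity_tb = count * tb_each
        print(f"{name}: {capacity_tb:5.1f}TB raw, drives ${drives_total:,} "
              f"of ${system:,} total, ${system / capacity_tb:,.0f}/TB")

    # ME4012:  14.4TB raw, drives $3,978 of $13,869 total, $963/TB
    # ME4084: 336.0TB raw, drives $29,484 of $51,599 total, $154/TB

Note how the bigger array drops to roughly $154/TB; we’ll compare that figure against commodity JBODs below.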

DAS, NAS, and SAN

The three types of storage that have been around in the enterprise market for decades are Direct Attached Storage (DAS), Network Attached Storage (NAS), and Storage Area Network (SAN). These acronyms cause some confusion. DAS is simply storage attached to a computer, whether an external storage system connected to one machine or a USB drive. It’s simple to set up, but capacity is limited. This is a no-go for any machine learning project in an enterprise setting: at some point, multiple servers will need to interact with the storage system, so it must be able to scale to petabytes.

NAS is a network file server that connects to a network and provides file access to multiple users across a local area network. It can store media files, video, documents, CAD designs, and any other asset. Enterprise NAS solutions come with RAID built in and are highly available. To computers on the network, a NAS appears as just another folder. NetApp is the most popular name in the NAS space.

SANs are the most complex of the bunch. Right off the bat, they are expensive, sometimes costing upwards of a million dollars. However, when designed properly they are fully redundant, highly available, and highly performant, which is why they are prevalent in large corporate environments. In short, a SAN comprises multiple storage systems connected via a Fibre Channel fabric. Storage arrays have Fibre Channel HBAs (host bus adapters) connected to multiple Fibre Channel switches. Should any part fail, including a switch, the system keeps running; the sketch below puts rough numbers on why.
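
Here is a minimal sketch of the redundancy math. The 99.9% per-component availability is a made-up assumption for illustration, not a vendor figure: a path fails if any component in series fails, but two independent paths rarely fail at the same time.

    # Availability of one SAN path: HBA -> FC switch -> array controller in
    # series, then two independent paths (dual fabric) in parallel.
    def path_availability(components):
        """A path works only if every component on it works (series)."""
        a = 1.0
        for availability in components:
            a *= availability
        return a

    def redundant_availability(path_a, path_b):
        """The system works if at least one independent path works (parallel)."""
        return 1 - (1 - path_a) * (1 - path_b)

    single = path_availability([0.999, 0.999, 0.999])  # assumed 99.9% each
    dual = redundant_availability(single, single)
    print(f"single path: {single:.6f}")  # ~0.997003
    print(f"dual fabric: {dual:.6f}")    # ~0.999991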

Commodity Hardware

DAS, NAS, and SAN all have their place in the corporate environment. In most large enterprises, all three are used extensively, each serving the needs of a particular group. So when it comes to machine learning, which one is best: DAS, NAS, or SAN? The answer is none. The storage needed for machine learning is a whole different ball game. If we had to pick, the closest description would be NAS and SAN combined into one, but even that isn’t quite right.

What is needed for machine learning training is commodity hardware and open-source software (OSS). The storage hardware required is known in the industry as JBOD (just a bunch of disks). It doesn’t require RAID controllers, Fibre Channel HBAs, or any other special hardware; all of that is handled at the software level. The OSS powering the JBOD system and connecting it to applications is nothing short of spectacular, and in our view better than proprietary systems (vendors will disagree).

Ceph, GlusterFS, OpenZFS, and Lustre are powerful OSS products that incorporate features for scaling out, preventing data corruption, guaranteeing consistency, and much more. There are other OSS storage projects out there, but these four are well known and run in production in tens of thousands of compute-heavy environments around the world. They’ve been around for a decade-plus, and the global community has contributed vastly to each of them. The hard part is picking the right one or two; more on that later. One practical consequence of software-level redundancy is worth noting up front: usable capacity is a fraction of raw capacity.
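
A minimal sketch of that trade-off, assuming Ceph-style 3× replication and an illustrative 8+3 erasure-coding profile (the parameters are chosen for the example, not as recommendations):

    # Raw vs. usable capacity under software-level redundancy.
    def usable_replicated(raw_tb, replicas=3):
        """Replication stores every object `replicas` times."""
        return raw_tb / replicas

    def usable_erasure_coded(raw_tb, k=8, m=3):
        """Erasure coding: k data chunks + m parity chunks per object."""
        return raw_tb * k / (k + m)

    raw = 1_690.0  # the 1.69PB Thinkmate JBOD priced below
    print(f"3x replication:   {usable_replicated(raw):7.0f} TB usable")    # ~563 TB
    print(f"8+3 erasure code: {usable_erasure_coded(raw):7.0f} TB usable")  # ~1229 TB

Erasure coding trades some CPU and rebuild time for a much better usable-capacity ratio, which matters at petabyte scale. With that in mind, let’s discuss hardware.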

Commodity JBOD

JBOD systems are easy to build. They can be purchased as a bare-bones system with the motherboard, CPU, RAM, and one drive, or as a complete system. If a bare-bones system is purchased, the builder needs to ensure all required components are there; these vary depending on whether PCIe NVMe or SAS drives are used.

There are dozens, if not hundreds, of manufacturers that sell JBOD systems. Below is a small sample of prices for JBOD systems that come with all the required components. Notice how the 1.69PB JBOD costs almost the same as the 336TB Dell PowerVault; the calculation after the list makes the per-terabyte gap explicit.

  • Thinkmate 4U, 106×16TB SAS drives (1.69PB), 2000W: $51,027
  • Thinkmate 4U, 106×18TB SAS drives (1.91PB), 2000W: $56,532
  • RackmountPro 4U, 60×8TB SAS drives (480TB), 2000W: $25,356
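
Putting the quoted JBOD prices next to the roughly $154/TB the ME4084 worked out to earlier makes the gap concrete. The JBOD figures exclude the software layer (Ceph and friends are free, but not effortless):

    # Cost per terabyte for the JBOD quotes above vs. the Dell ME4084.
    DELL_ME4084_PER_TB = 51_599 / 336  # from the earlier PowerVault quote

    jbods = {
        "Thinkmate 1.69PB":   (51_027, 1_690),
        "Thinkmate 1.91PB":   (56_532, 1_910),
        "RackmountPro 480TB": (25_356, 480),
    }

    for name, (price, tb) in jbods.items():
        per_tb = price / tb
        print(f"{name:20s} ${per_tb:6.2f}/TB "
              f"({DELL_ME4084_PER_TB / per_tb:.1f}x cheaper than the ME4084)")

    # Thinkmate 1.69PB     $ 30.19/TB (5.1x cheaper than the ME4084)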

Next, let’s price out a bare-bones JBOD system and its components to get an idea of pricing.

192TB JBOD with 2TB RAM

  • Supermicro Server (chassis) 2113S-WN24RT: $3,033 (Ace Micro)
  • Mellanox NIC MCX515A 100GbE: $749
  • Western Digital U.2 NVMe SSD 8TB: $999/ea. × 24 drives = $23,976
  • Samsung M393AAG40M32-CAE 128GB DDR4 SDRAM (16 needed): $960/ea. × 16 = $15,360

The Supermicro JBOD is a bare-bones system with chassis, motherboard, one AMD EPYC 7001-series CPU, redundant power supplies, and 24 hot-swap bays for NVMe SSDs. Yes, U.2 NVMe SSDs are hot-swappable. Filled with 24×8TB SSDs, that’s a whopping 192TB of NVMe storage alongside 2TB of RAM. Disclaimer: we’re not sure these parts are compatible with each other. The sketch below totals the component costs.
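
Totaling the quoted component prices gives a rough build cost. These are just the list prices above; drive prices fluctuate, and, as noted, compatibility is unverified.

    # Rough bill of materials for the 192TB all-NVMe build, list prices only.
    parts = {
        "Supermicro 2113S-WN24RT barebones": 3_033,
        "Mellanox MCX515A 100GbE NIC":       749,
        "WD 8TB U.2 NVMe SSD x24":           999 * 24,  # 24 bays filled = 192TB
        "Samsung 128GB DDR4 DIMM x16":       960 * 16,  # 16 DIMMs = 2TB RAM
    }

    total = sum(parts.values())
    print(f"total: ${total:,}")               # $43,118
    print(f"per TB: ${total / 192:,.2f}/TB")  # ~$224.57/TB, all-NVMe

At roughly $225/TB this is pricier per terabyte than the SAS JBODs above, but it is all NVMe flash, a different performance class entirely.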
