360 ICT, gegarandeerd betrouwbaar

Dell 14G servers are here!

Normally I wouldn’t blog about new hardware, but after our recent adventures with S2D, I’m making an exception for the Dell R740xd.

Some things I noticed:

  • The R740 seems to have a RAID1 setup option for the flexbays.

  • The R740xd, however, does not seem to have this option.
    Further on the configuration page I found “BOSS controller card + with 2 M.2 Sticks 240G (RAID 1), FH”.
    So this is their solution for the RAID1 setup? I had never heard of the “BOSS Controller Card”, but I found this document, which states: “PowerEdge Engineering developed a simple, cost-effective way of meeting this customer need. The Boot Optimized Storage Solution uses one or two M.2 SATA devices instead of 2.5” SSD drives to house the OS, and utilizes a two-port SATA Hardware RAID controller chip to provide Hardware RAID 1 and Pass Through capabilities. The M.2 devices offer the same performance as 2.5” SSD drives, and by consolidating the SSDs and controller chip on a single PCIe adapter card, the solution frees up an additional drive slot for data needs.”

    So the M.2 disks sit on the controller card? Does that mean you have to power off the server to access a failed drive? Then I found the support videos on the Dell support site.

    So to replace a disk you do need to turn off the server, open the chassis, remove the controller, and replace the add-in card. That seems like a step back to me, as I prefer to simply hot-replace the disk through the flexbay.
  • Maybe we can use the “PERC H840 RAID Adapter Low Profile” to make a RAID1 setup with the flexbay drives; it looks like an internal card meant for a backplane… I’m guessing here.
  • You can choose a 24x chassis with a 4-drive internal “Mid-Bay” and still have 4 flexbays at the back. This is really nice if you need that many disks (as we tend to with S2D), although opening up a server to replace a disk seems like a hassle I would like to avoid.
  • “Chassis up to 24 x 2.5” Hard Drives including 12 NVME Drives”
    So, I assumed this was the HHHL drop-in card, but they meant the NVMe drives loaded in the frontbay. They are basically 2.5″ drives and have different specs compared to the HHHL drop-in cards. You can still only configure 4x HHHL (drop-in) cards.
  • “Intel® Xeon® Platinum 8180 2.5G,28C/56T,10.4GT/s 2UPI,38M Cache,Turbo,HT (205W) DDR4-2666”
    Wow, 28 cores per CPU! It uses 205W per CPU, and a 2-CPU setup will set you back 14.000eu compared to an 8-core CPU (the Intel 4108).
  • It still uses the HBA330 for S2D, it seems (all the other controllers are for RAID setups).
  • It still uses the Samsung PM1725 NVMe for now, but it’s supposed to be replaced by the PM1725a in 2017Q3.
  • 2000W power supply, compared to the 1100W in the R730xd. That’s probably because of the CPUs and GPUs you can add. I could configure 12x NVMe with an 1100W supply, but as soon as I added the Nvidia P4000 GPU, warnings came up about needing a bigger PSU.
  • “Mellanox ConnectX-4 Lx Dual Port 25GbE DA/SFP rNDC” daughter board!
    This is the onboard NIC you can configure. I’ve always used the Intel 2x 10Gb + 2x 1Gb daughter board, but now you can add an onboard Mellanox card with dual 25GbE ports for 480eu!
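The PSU observation above boils down to simple arithmetic: component draws have to fit within the supply, with some headroom. Here is a back-of-the-envelope Python sketch; apart from the 205W CPU TDP quoted in Dell's config string, every figure (per-drive NVMe draw, GPU draw, base-system draw, headroom factor) is an assumption for illustration, not a Dell specification.

```python
# Back-of-the-envelope PSU sizing for an R740xd-style build.
# Only CPU_TDP_W comes from the Dell config string; the rest are
# assumed, illustrative figures.
CPU_TDP_W = 205   # Intel Xeon Platinum 8180 TDP (from Dell's spec line)
NVME_W = 25       # assumed draw per PM1725-class 2.5" NVMe drive
GPU_W = 105       # assumed draw for a P4000-class add-in GPU
BASE_W = 250      # assumed board, RAM, fans, NICs, etc.
HEADROOM = 0.9    # assume we want to stay under 90% of rated output

def total_draw(cpus=2, nvme=12, gpus=0):
    """Sum the worst-case draw of the configured components."""
    return cpus * CPU_TDP_W + nvme * NVME_W + gpus * GPU_W + BASE_W

def fits(psu_watts, draw):
    """Does this draw fit the PSU with the assumed headroom?"""
    return draw <= psu_watts * HEADROOM

no_gpu = total_draw()              # 2*205 + 12*25 + 250 = 960 W
with_gpu = total_draw(gpus=1)      # 960 + 105 = 1065 W
print(fits(1100, no_gpu))          # 12x NVMe alone fits an 1100W PSU
print(fits(1100, with_gpu))        # add the GPU and it no longer fits
print(fits(2000, with_gpu))        # the 2000W supply has room to spare
```

With these assumed numbers the sketch reproduces the configurator's behaviour: 12x NVMe squeezes into 1100W, but the GPU tips it over, which is presumably why the warning appears.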

More details can be found here and here. Now all we have to do is wait 6 months for the reference design to appear…

Update 27th July 2017: the S2DRN for the R740xd is expected mid-August! We’ll be needing it because of two pending S2D implementations (a 2-node and a 4-node) based on the R740xd.

Update 6th October 2017: the S2DRN documents have been published. Also, the H840 is for external disks; BOSS is the only option for now, although the H330/S140 with flexbays might be possible.
