A Custom Controller Native PCIe SSD in 350GB and 700GB Capacities
SSDs are beginning to challenge conventional drive form factors in a major way. On the consumer side we're seeing more systems use new form factors for SSDs, enabled by mSATA; the gumstick form factor used in the MacBook Air and ASUS UX series comes to mind. SSDs can offer high performance in a smaller package, helping scale down the size of notebooks.
The 11-inch MacBook Air's SSD, courtesy of iFixit
The enterprise market has seen a form factor transition of its own. While 2.5″ SSDs are still immensely common, there's a lot of interest in PCIe solutions.
The quick and easy way to get a PCIe SSD is to take a bunch of SATA SSDs and RAID them together on a single PCIe card. You don't really get a performance benefit from PCIe itself, but the approach does help you aggregate a lot of performance without being drive-bay limited. This is what we typically see from companies like OCZ.
The other alternative is a native PCIe solution. In the aforementioned example you typically have a couple of SATA SSD controllers paired with a SATA-to-PCIe RAID controller. With a native solution you skip the RAID controller entirely and have a custom SSD controller that interfaces directly with PCIe. A native PCIe SSD avoids SATA entirely, and thus any bottlenecks that interface might introduce. Today Micron is announcing its first native PCIe SSD: the P320h.
The P320h is Micron's first PCIe SSD as well as its first in-house controller design. You may remember from our C300/C400/m4 reviews that Micron typically buys its controllers from Marvell and simply does firmware development in house. The P320h changes that. While it's too early to assume that we'll see Micron-designed controllers in consumer drives as well, that's clearly a step the company is willing to take.
The P320h's controller is a beast. With 32 parallel channels and a PCIe gen 2 x8 interface, the P320h is built for bandwidth. Micron's peak performance specs speak for themselves:
Sequential read/write performance is up to 3GB/s and 2GB/s respectively. Random 4KB read performance is up at a staggering 750,000 IOPS, while random write speed peaks at 341,000 IOPS. The former is unmatched by anything I've seen on a single card, while the latter is a number that OCZ's recently announced Z Drive R4 88 is promising as well. Note that these aren't steady state numbers nor are the details of the testing methodology known so believe accordingly.
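As a quick sanity check (my own arithmetic, not Micron's), the 4KB IOPS figures convert to throughput levels that fit comfortably inside a PCIe gen 2 x8 link:

```python
def iops_to_gbps(iops, io_bytes=4096):
    """Convert an IOPS figure at a given transfer size to decimal GB/s."""
    return iops * io_bytes / 1e9

# PCIe gen 2: 5 GT/s per lane with 8b/10b encoding -> ~500 MB/s usable per lane
pcie_gen2_x8_gbps = 8 * 0.5  # ~4 GB/s ceiling for an x8 link

print(iops_to_gbps(750_000))  # 4KB random reads -> ~3.07 GB/s
print(iops_to_gbps(341_000))  # 4KB random writes -> ~1.40 GB/s
```

Note how the 750K read IOPS figure lands almost exactly on the 3GB/s sequential read spec: at these rates the drive is interface-bandwidth bound rather than IOPS bound.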
There is of course support for NAND redundancy, which Micron calls RAIN (Redundant Array of Independent NAND). Micron describes RAIN as very similar to RAID with 7 data channels plus 1 parity channel, however it didn't release details on what sorts of failures are recoverable as a result. RAIN, on top of typical enterprise-level write amplification concerns, results in some pretty heavy overprovisioning on the drive, as you can see below.
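Micron hasn't detailed RAIN's internals, so as a rough illustration only, here is a generic single-parity sketch of protecting a stripe of NAND channels (a hypothetical model, not Micron's implementation):

```python
def parity(blocks):
    """XOR parity across a list of equal-length channel blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# 7 data channels + 1 parity channel, 4-byte toy blocks
data = [bytes([c] * 4) for c in range(7)]
p = parity(data)

# Lose any one channel and rebuild it from the 6 survivors plus parity
lost = data[3]
rebuilt = parity(data[:3] + data[4:] + [p])
assert rebuilt == lost
```

This is the same recovery property RAID 5 offers across disks, applied across NAND channels: any single channel failure is reconstructable, at the cost of one channel's worth of capacity.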
Micron will offer the P320h in two capacities: 350GB and 700GB. The drives use 16Gbit 34nm SLC NAND (ONFI 2.1). The 700GB drive features 64 package placements with 8 die per package; at 2GB per 16Gbit die that works out to 16GB per package, or 1TB of NAND on the card.
The 350GB version has the same number of package placements (64) but only 4 die per package, which works out to 512GB of NAND on board. Obviously with twice as many die per package the 700GB drive sees some interleaving benefits, which result in better 4KB random write performance.
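The capacity math above checks out, and it also reveals how heavy the overprovisioning is (my arithmetic, working from the article's die counts and 2GB per 16Gbit die):

```python
# Figures from the article: 16Gbit SLC die, 64 package placements per card
die_gb = 16 / 8           # a 16Gbit die holds 2GB
packages = 64

raw_700 = packages * 8 * die_gb  # 8 die/package -> 1024GB raw NAND (~1TB)
raw_350 = packages * 4 * die_gb  # 4 die/package -> 512GB raw NAND

# Both models expose ~68% of raw NAND, i.e. ~32% set aside
print(raw_700, 700 / raw_700)
print(raw_350, 350 / raw_350)
```

Both capacity points expose the same fraction of their raw NAND, which is consistent with a fixed RAIN parity and spare-area overhead per channel.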
Pricing is unknown at this point, although Micron pointed out that it is expecting cost to be somewhere south of $16 per GB (at $16/GB that would be $5600 for the 350GB board and $11,200 for the 700GB board).
GullLars, Thursday, June 02, 2011
This is what I also thought of when I read “Sequential read/write performance is up to 3GB/s and 2GB/s respectively. Random 4KB read performance is up at a staggering 750,000 IOPS, while random write speed peaks at 341,000 IOPS. The former is unmatched by anything I’ve seen on a single card, while the latter is a number that OCZ’s recently announced Z Drive R4 88 is promising as well. Note that these aren’t steady state numbers nor are the details of the testing methodology known so believe accordingly.”
ioDrives have been the benchmark to beat, and it doesn’t seem like P320h can beat it. ioDrives also scale from 1 to 8 controllers on the same card in a cluster type setup, and come with support for Infiniband for direct linking outside the host system.
TMS has both PCIe flash and RAM solutions that can beat this.
The question is really: will this be able to compete in its price range on a TCO vs QOS basis, at least for the range this card is targeting? If you read the specs a little more closely you will see that the Fusion-io drive is a 150-watt, double-width, full-length card. The P320h is quoted at 25W max in this review. That's a huge difference.
The Octal is about $5000 more according to a quick Google search. So this is really an apples-to-oranges comparison.
Watt for watt the P320h wins hands down. It is also a single-width card. I suspect this will be very desirable for the enterprise/server market. I also saw on The Register that they have an HHHL card too.
vol7ron, Thursday, June 02, 2011
You guys are looking at today and not the big picture, the down-the-road impact. Don't be so short-sighted and nit-pick my word choice; it's a comment, not an article.
Sure it's SLC today, but it'll be MLC tomorrow, perhaps with 3D-engineered hardware. Down the road they might not use passive cooling, but still, the PCIe slots are near where a lot of the heat is generated, so even without the dedicated card I'm curious about the long-term effects.
Back to your response: gaming systems tend to be used for more than just gaming; for the majority of gamers their machines are more high-end all-purpose systems. Sure, dedicated pro gamers have their specific setups and boxes used solely for gaming, but I was generalizing. Teens and amateur gamers are not going to spend the $$ on multiple computers, one to crunch numbers and one to play online. That being said, 4GB is not enough on a 64-bit system with background virus/anti-cheat scanning running while recording demos. And if it isn't a serious gamer, they'll be having their desktop widgets and internet browsers open, possibly even streaming TV if they have a decent system.
To counter your point, for those enterprise systems it'd be more cost-effective to use the non-PCIe alternatives. And if it is an enterprise system, I question the number of PCIe slots available that would allow for RAIDing, or even whether that's possible with this setup, as it might be software-driven.