
Samsung’s Q&A on how data centres can save money with 3D memory

Feature articles
By Julien Happich



Q: Where does Samsung see itself as a DRAM and SSD manufacturer in the data centre context?

A: Data centres don’t have it easy these days. The space available for servers in data centres is finite, the power consumption permitted is capped and the demand for the volumes of data that can be processed daily is soaring. The aim here is to maintain a balance between the need for more capacity and the increased demand for performance at an acceptable price. We regard ourselves as being on the innovative side of our industry, and new Samsung 3D technologies help our customers in the storage sector especially to achieve this difficult balance more easily.

 

Q: Where do the challenges for coping with the demands on data centres lie?

A: Big data is driving a lot of things forward, and this is also having an impact on working memory. Enormous volumes of data are being produced everywhere, including in the private domain, for example at events. Everything is photographed and filmed nowadays; snaps are uploaded from smartphones to the familiar platforms and shared on from there.

Big data certainly doesn’t just mean financial risk modelling in banks; it also means perfectly normal services that we use every day, business models made possible by these volumes of data and by quick access to them, whether Facebook, Instagram, Amazon or Uber. The list is endless, and up-and-coming IoT applications will multiply these demands yet again. The result is a tremendous demand for high-performance storage capacity. Over the next three years, market researchers anticipate almost a doubling of demand, especially for mobiles and SSDs. For NAND, for example, iSuppli cites annual market growth of 42%.

The memory/storage subsystem has very clearly moved from an “also-ran” position to pole position in the data centre, even if not every user is aware of it today. And to stay with the motor-racing image: without big data and real-time computing, Formula 1 wouldn’t even be possible, because the drivers are controlling cars packed full of sensor technology that constantly communicates with high-powered computers back at the team to optimise performance and tactics during the race. And let’s not forget the design phase for these cars, which is nowadays dominated by CFD (computational fluid dynamics) and virtual wind tunnels.

 

Q: What impacts is this having on storage systems, and in particular on disk storage?

A: The hard drive has a long way to go yet. Even if all of the storage manufacturers put their production capacities for NAND flash together, it wouldn’t be enough to cover total demand. What we are seeing, however, is that tier-0 and tier-1 storage are moving towards flash, and even tier-2 is already being discussed. This trend is extremely clear. We’re also seeing it in sales figures: in 2015, we sold three times as many SSDs as we did in the previous year, and after the first quarter it’s already evident that this trend is continuing.


Q: Does it really make a difference in servers which type of flash or DRAM is used?

A: Yes, because ultimately it’s about increasingly powerful chips. Four main points matter here: increasing storage density to get more capacity from the same area, higher speed, lower power consumption, and the longest possible service life with excellent reliability.

 

Q: And how is this noticeable now with DRAM working memory?

A: Although you may not believe it, even modern memory modules have potential for savings. Let’s take a server scenario with 96 GB of DDR4 RAM (twelve 8 GB modules, 20 nm, 1.2 V), which in practice would sit at the lower end. A small to medium-sized data centre might operate more than 1,000 such servers. With modern process technology in the 20-nanometre range and high storage density, the power element of the overall running costs can be reduced by 150,000 to 200,000 kWh. It therefore quickly makes business sense for companies to invest in the latest technology to truly make the very best use of the potential savings offered by virtualisation.
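A rough back-of-envelope sketch shows how a saving of that order can arise; the per-module power figures and the annual accounting period are illustrative assumptions, not values from the interview:

```python
# Back-of-envelope sketch of the DRAM energy saving quoted above.
# All per-module power figures are illustrative assumptions,
# not Samsung datasheet values.

MODULES_PER_SERVER = 12          # 96 GB as twelve 8 GB DDR4 modules
SERVERS = 1000                   # small to medium-sized data centre
HOURS_PER_YEAR = 24 * 365

# Assumed average power per 8 GB module, in watts:
older_module_w = 4.0             # e.g. 30 nm-class, higher-voltage DDR3
newer_module_w = 2.0             # 20 nm-class, 1.2 V DDR4

delta_w = (older_module_w - newer_module_w) * MODULES_PER_SERVER * SERVERS
kwh_saved = delta_w * HOURS_PER_YEAR / 1000

print(f"Power delta: {delta_w / 1000:.0f} kW across the fleet")
print(f"Energy saved: {kwh_saved:,.0f} kWh per year")
# -> roughly 210,000 kWh/year, in the ballpark of the 150,000 to
#    200,000 kWh figure quoted above (cooling overhead would add more).
```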

 

Q: What’s so special about TSV technology?

A: Naturally, we expect storage density in servers to grow significantly, and we regard the transition from Haswell to Broadwell as an opportunity to expand server capacity considerably. This in turn is essential for increasingly important in-memory computing, such as SAP HANA. Two or three years ago, we were still talking to customers about proofs of concept (PoC) or proofs of technology (PoT) with one to two TB of DRAM per server. We’ve now reached 12 to 17 TB per server. This increase in capacity has taken place primarily on the memory module, because the number of slots on the mainboards hasn’t grown at anything like the same rate.
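A quick sketch makes the point; the slot counts are illustrative assumptions for multi-socket in-memory servers, and only the module capacities echo figures from this interview:

```python
# Sketch: per-module capacity, not slot count, is what drives total
# server DRAM capacity. Slot counts here are illustrative assumptions.

def server_dram_tb(dimm_slots: int, gb_per_module: int) -> float:
    """Total DRAM capacity in TB for a fully populated server."""
    return dimm_slots * gb_per_module / 1024

# A few years ago: 64 GB modules in a 32-slot two-socket machine.
print(server_dram_tb(32, 64))    # -> 2.0 TB, the old PoC range

# Today: the 128 GB TSV modules described below, in a 96-slot
# multi-socket in-memory machine.
print(server_dram_tb(96, 128))   # -> 12.0 TB, the range quoted above
```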

Since increasing the capacity of the basic components is not straightforward, DRAM chips are instead stacked: several chips are positioned on top of each other inside a package and then connected to each other.

With TSV 3D technology, we are now achieving 128 GB on a single DRAM module. At the same time, we are circumventing the limits of conventional wire-bonding architecture. Until now, stacked chips have been linked with wires and additional controllers, which increased both complexity and power consumption. It also limits the addressable bandwidth, and therefore the speed, because the many controllers cannot be addressed any faster.

TSV (through-silicon via) DRAMs, on the other hand, are linked to each other by several thousand vias inside the package, like lifts in a skyscraper, and are now achieving speeds of 2,400 Mbit/s, which can be increased even further in future generations. This does away with wire bonding, and communication with the memory controller is no longer performed by every DRAM chip but by a single master chip. Overall, this also reduces energy consumption by almost 25 per cent.

With the next generation in the 10-nanometre process, we’ll achieve 3,400 Mbit/s. We should be seeing the first products from this generation in two years’ time.
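To put those per-pin rates into module terms, here is a small sketch; the 64-bit bus width is the usual JEDEC DIMM convention, and the module power figure is purely an assumption:

```python
# Sketch: translating the quoted per-pin data rates into peak module
# bandwidth, assuming a standard 64-bit wide DIMM data bus.

def module_bandwidth_gb_s(mbit_per_pin: float, bus_width_bits: int = 64) -> float:
    """Peak module bandwidth in GB/s from the per-pin data rate."""
    return mbit_per_pin * bus_width_bits / 8 / 1000

print(module_bandwidth_gb_s(2400))   # today's TSV DDR4      -> 19.2 GB/s
print(module_bandwidth_gb_s(3400))   # next 10 nm-class gen  -> 27.2 GB/s

# The ~25% energy saving quoted above, applied to an assumed figure:
assumed_module_w = 10.0              # illustrative 128 GB module power
print(assumed_module_w * (1 - 0.25)) # -> 7.5 W with a single master chip
```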


Q: As we have already mentioned, SSDs are increasingly being used in storage systems. 3D architecture is becoming commonplace here too. What are the features of this V-NAND technology, and how is it different from conventional flash storage?

A: With the planar NAND used in flash and SSDs to date, it was becoming almost impossible to achieve higher data density. To increase density, the horizontally arranged memory cells have to be brought closer and closer together, which increases the risk of cells influencing each other and of data being changed or lost.

So we use 3D V-NAND technology. Unlike planar, two-dimensional NAND, the memory cells are arranged on top of each other and connected vertically. The charge-storing layer, previously made from conductive material, is replaced in V-NAND by an insulator. The vertical arrangement of the cells also allows a physically wider bit line, which directly helps to suppress interference between the cells.

The increase in data density from two to three bits per cell creates the opportunity to accommodate more storage capacity on a single chip. In the data centre, we are currently working with everyday capacities of up to four TB, and in the client sector we offer affordable SSDs with up to two TB. It should also be mentioned that, despite the improved performance and additional features such as power savings, these SSDs do not cost more.
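The two density multipliers, bits per cell and vertical layers, compound; the layer count in this sketch is an assumption (48-layer V-NAND parts were current at the time), and only the two-to-three-bit step comes from the answer above:

```python
# Sketch: how bits per cell and stacked layers multiply die density,
# relative to a planar 2-bit-per-cell (MLC) baseline. Layer count is
# an illustrative assumption, not a figure from the interview.

def relative_density(bits_per_cell: int, layers: int) -> float:
    """Density relative to planar (single-layer) MLC."""
    return (bits_per_cell / 2) * layers

print(relative_density(3, 1))    # TLC alone          -> 1.5x
print(relative_density(3, 48))   # TLC on 48 layers   -> 72.0x
# (Real gains are smaller: 3D cells are larger and use relaxed
#  lithography precisely to avoid the interference described above.)
```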

 

Q: What role does NVMe have to play and for which application scenarios are Enterprise SSDs recommended?

A: NVMe is ultimately the interface we’ve all been waiting for. It allows SSDs to realise their performance capacity much more effectively. Take this analogy: what use is the best sports car if it’s being pulled along by a donkey? With NVMe, you can put the donkey out to pasture and let the sports car really do its thing.

Unlike AHCI, which SATA has used so far, NVMe processes multiple commands simultaneously, achieving four- to five-times better performance than SATA SSDs. In practice, transfer speeds of 1.5 GB/s can be achieved when writing sequential data and 2.5 GB/s when reading. Workloads can be processed with up to 300,000 IOPS. Conventional SATA III SSDs, on the other hand, only support data rates of a maximum of 600 MB/s and 100,000 IOPS. Our flagship model, the PM1725, offers a capacity of up to 6.4 TB, a bandwidth of 3 GB per second and 1 million IOPS – which means that a huge workload can be buffered on the server side before any data needs to be exchanged with the storage network.
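A simple comparison using the figures quoted above illustrates the gap; the 1 TB dataset and the ten-million-request workload are arbitrary illustrative choices:

```python
# Sketch: time to read a 1 TB dataset sequentially and to service
# 10 million small random requests, using the figures quoted above.

WORKLOAD_MB = 1_000_000          # 1 TB, decimal
SMALL_IOS = 10_000_000

drives = {
    # name:            (seq. read MB/s, IOPS)
    "SATA III SSD":    (600,    100_000),
    "NVMe SSD":        (2500,   300_000),
    "PM1725":          (3000, 1_000_000),
}

for name, (mb_s, iops) in drives.items():
    seq_min = WORKLOAD_MB / mb_s / 60
    rand_s = SMALL_IOS / iops
    print(f"{name:13s} 1 TB in {seq_min:5.1f} min, 10M IOs in {rand_s:5.1f} s")
# SATA III needs ~28 min for what the PM1725 streams in under 6.
```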

In principle, all applications will benefit from these speed advantages. Initially, SSDs were used as boot drives instead of HDDs, and later as cache between the server and storage. Now, servers are interacting with complete all-flash arrays and with tiered storage systems with integrated SSDs.


Q: How do server SSDs differ from consumer drives?

A: Different workloads demand differently designed SSDs. On the OEM side, these are our models for clients, single workstations and laptops, which range from simple hard drive replacements to high-performance workstation solutions. With server SSDs, the requirements profile primarily depends on the application’s write intensity. The mainstream mostly needs drives for read-intensive applications, meaning 90% reading and 10% writing. In practice, these include file and web servers, video-on-demand solutions or content distribution networks. On the other side is the mixed workload, such as for database and application servers, in the cloud service environment, right through to storage systems.

In both cases, we offer SSDs for the three interfaces that currently exist on the market: SATA, SAS and, increasingly, PCIe with support for the NVMe protocol. These also differ in terms of drive writes per day, capacity and total bytes written, which together define their service life.

When it comes to service life especially, IT managers should understand their needs, because it is like using a pre-paid card. If I normally spend 50 euros a month making calls, I should buy myself a 60-euro card, because that way I have a little spare. I shouldn’t buy one for 20 euros and then wonder 10 days later why the card balance is zero. I have to say, though, that this awareness has now become fairly widespread, as has the fact that you should use SSDs that are designed for servers. These come with firmware that can talk to RAID controllers in their various versions. A consumer SSD can’t do this: if it is busy with garbage collection, for example, and doesn’t respond to the controller, then the controller thinks the drive is faulty.
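The pre-paid-card analogy can be made concrete as an endurance budget; the TBW rating and daily write volumes below are illustrative assumptions, with only the five-year term echoing the warranty mentioned next:

```python
# Sketch of the pre-paid-card analogy as a write-endurance budget.
# The TBW rating and daily write volumes are illustrative assumptions.

def endurance_headroom(tbw_rating_tb: float,
                       writes_per_day_tb: float,
                       years: float = 5.0) -> float:
    """Fraction of the drive's write budget left after `years`."""
    written = writes_per_day_tb * 365 * years
    return 1 - written / tbw_rating_tb

# A drive rated for ~1,400 TBW over its service life:
print(f"{endurance_headroom(1400, 0.5):+.2f}")  # -> +0.35: fits, with spare
print(f"{endurance_headroom(1400, 1.0):+.2f}")  # -> -0.30: the 20-euro card
```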

The issue of sustained write performance is also important. At the start of a drive’s service life, you always get high bandwidths and many IOPS. As the drive fills up, the write speed can drop from its initial level of 90k IOPS to 1k. Our enterprise SSDs offer sustained performance of at least 15k write IOPS. Our firmware also allows over-provisioning (7% by default) to be adjusted. If you don’t need the full capacity and can manage with 360 GB instead of 480 GB, then the service life of the SSD doubles and the sustained write rate increases to around 29k, over the guaranteed service life of five years.
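The over-provisioning arithmetic behind that trade-off can be sketched as follows; the raw flash capacity is an assumption (a nominal 512 GB of NAND behind a 480 GB drive gives roughly the 7% default quoted):

```python
# Sketch of the over-provisioning (OP) arithmetic described above.
# The raw NAND capacity is an illustrative assumption.

def over_provisioning(raw_gb: float, usable_gb: float) -> float:
    """Spare flash relative to the user-visible capacity."""
    return (raw_gb - usable_gb) / usable_gb

print(f"{over_provisioning(512, 480):.0%}")  # -> 7%, the factory default
print(f"{over_provisioning(512, 360):.0%}")  # -> 42% after resizing

# More spare area lowers write amplification, hence the claimed
# doubling of service life and the rise from ~15k to ~29k sustained
# write IOPS.
```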


Q: What developments can we expect to see in the future? DRAM is faster, but volatile. Flash, on the other hand, is persistent, but not as fast. Will there be any coming together of these two technologies?

A: Yes, we expect so. But these are tomorrow’s technologies that are still in the early stages of laboratory development. We are constantly researching all kinds of technologies. But there’s nothing specific that we can discuss yet, and don’t expect any definite announcements over the next two or three years.

Nevertheless, we are seeing demand for the boundary between memory and storage to be dissolved, and a need to bridge the latency gap of roughly a factor of 1,000 between the two. We are turning to NVDIMM-P non-volatile memory for this. Compared to NVDIMM-F and NVDIMM-N, the P approach has the advantage that the memory can be addressed both as bytes and as blocks. Access to stored data will operate in a latency range close to that of DRAM, which is especially interesting for in-memory computing. With this new technology, a storage medium becomes available that is persistent but sits on the fast memory channel, and whose contents can be recovered in a fraction of the usual time if needed.
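For a sense of the factor-of-1,000 gap being bridged, here is a sketch with typical order-of-magnitude latencies; all figures are general assumptions, not Samsung specifications:

```python
# Sketch of the latency gap NVDIMM-P is meant to bridge. All values
# are typical order-of-magnitude assumptions, not vendor figures.

LATENCY_NS = {
    "DRAM (byte access)":      100,        # ~100 ns
    "NVDIMM-P (target range)": 1_000,      # near-DRAM, bytes or blocks
    "NVMe NAND SSD":           100_000,    # ~100 microseconds
}

base = LATENCY_NS["DRAM (byte access)"]
for tier, ns in LATENCY_NS.items():
    print(f"{tier:24s} {ns:>8,} ns  ({ns / base:>6.0f}x DRAM)")
# NAND flash sits roughly 1,000x above DRAM; NVDIMM-P aims at the
# space in between, on the memory channel itself.
```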

In addition, JEDEC specifications already exist for non-volatile DIMMs. We are working on this with Netlist as our partner, who has already made considerable progress in this field. Our ambition here is to offer more capacity than others. We also realise that customers prefer open standards. As a result, we are deliberately using JEDEC and not proprietary interfaces like other suppliers on the market. With our NVDIMMs there is no vendor lock-in, and these memory modules will be compatible with any server.

We expect to be able to provide OEMs with the first NVDIMM samples in the second half of the year. The channel will then certainly have to wait another two or three quarters, but the technology is clearly on its way.

 

About the author:

Thomas Arenz is director of marcom and strategic business development at Samsung Semiconductor Europe – www.samsung.com/semiconductor/

 

