September 24, 2022


SNIA held its Persistent Memory and Computational Storage Summit, virtual this year, like last year. The Summit explored some of the latest developments in these topics. Let’s explore some of the insights from that virtual conference from the first day.

Dr. Yang Seok, VP of the Memory Solutions Lab at Samsung, spoke about the company’s SmartSSD. He argued that computational storage devices, which off-load processing from CPUs, may reduce energy consumption and thus provide a green computing alternative. He pointed out that data center energy usage has stayed flat at about 1% since 2010 (in 2020 it was 200-250 TWh per year) due to technology innovations. Nevertheless, there is a challenging milestone to reduce greenhouse gas emissions from data centers by 53% from 2020 to 2030.
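
To put that target in perspective, a quick back-of-the-envelope calculation shows what a 53% reduction would mean against the 200-250 TWh range from the talk (using both endpoints; the calculation itself is illustrative, not from the presentation):

```python
# Back-of-the-envelope: what a 53% cut from 2020 data center energy
# usage implies for 2030. The 200-250 TWh range comes from the talk;
# everything else here is simple arithmetic for illustration.
low_twh, high_twh = 200.0, 250.0
reduction = 0.53  # 53% reduction target, 2020 -> 2030

target_low = low_twh * (1 - reduction)
target_high = high_twh * (1 - reduction)
print(f"2030 target range: {target_low:.0f}-{target_high:.1f} TWh")
# -> 2030 target range: 94-117.5 TWh
```

In other words, data centers would need to operate on roughly 94-118 TWh per year by 2030 even as workloads keep growing, which is the context for the energy-saving argument behind computational storage.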

Computational storage SSDs (CSDs) can be used to off-load work from a CPU, freeing the CPU for other tasks, or to accelerate processing locally, closer to the stored data. This local processing is done with less power than a CPU, can be used for data reduction operations in the storage device (enabling higher storage capacities and more efficient storage), avoids data movement, and can be part of a virtualization environment. CSDs appear to be competitive and use less energy for IO-intensive tasks. Also, the computational power of CSDs scales with the number of CSDs used.
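
The core of the data-movement argument can be sketched with a minimal cost model: host-side processing moves every byte across the storage link, while a CSD reduces the data in place and ships only the results. All the numbers below (data size, filter selectivity) are illustrative assumptions, not measurements from the talk:

```python
# Minimal cost model for host vs. near-data (CSD) processing of an
# IO-intensive filter/reduction task. Numbers are illustrative
# assumptions, not measured values.

def host_bytes_moved(data_bytes: int) -> int:
    # Host-side processing: every byte crosses the storage link to the CPU.
    return data_bytes

def csd_bytes_moved(data_bytes: int, selectivity: float) -> int:
    # Near-data processing: the CSD filters/reduces in place and only
    # the surviving fraction ("selectivity") of the data is moved.
    return int(data_bytes * selectivity)

data = 100 * 2**30   # hypothetical 100 GiB scan
surviving = 0.02     # hypothetical 2% of the data matches the filter

ratio = host_bytes_moved(data) // csd_bytes_moved(data, surviving)
print(f"data movement reduced {ratio}x")  # -> data movement reduced 50x
```

Since each added CSD brings its own processing engine, this per-device saving is also what makes the aggregate computational power scale with the number of CSDs.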

Samsung announced its first SmartSSD in 2020 and says that its next generation will be available soon (see figure below). The next generation will allow greater customization of what the processor in the CSD can do, enabling its use in more applications and perhaps saving energy for many processing tasks.

Stephen Bates from Eideticom and Kim Malone from Intel spoke about new standards developments for NVMe computational storage. One of the additions in the Computational Programs command set is the computational namespace. This is an entity in an NVMe subsystem that is capable of executing one or more programs, may have asymmetric access to subsystem memory, and may support only a subset of all possible program types. The conceptual image below gives an idea of how this works.

There is support for both device-defined and downloadable programs. Device-defined programs are fixed programs provided by the manufacturer, or functionality implemented by the device (such as compression or decryption) that can be called as a program. Downloadable programs are loaded into the computational programs namespace by the host.
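
The split between the two program types can be sketched as a toy model. The class and method names below are illustrative assumptions for exposition, not the actual NVMe command set interface:

```python
# Toy model of an NVMe computational programs namespace, sketched from
# the concepts in the talk. Names are illustrative, not from the spec.

class ComputationalNamespace:
    def __init__(self):
        # Device-defined programs: fixed functionality provided by the
        # manufacturer, callable as programs (stand-in implementations).
        self.programs = {
            "compress": lambda d: d[:len(d) // 2],        # fake "compression"
            "decrypt": lambda d: bytes(b ^ 0x5A for b in d),
        }

    def load_program(self, name, fn):
        # Downloadable programs are supplied by the host at run time.
        self.programs[name] = fn

    def execute(self, name, data: bytes) -> bytes:
        # A namespace may support only a subset of program types, so
        # unknown program names are rejected.
        if name not in self.programs:
            raise ValueError(f"program type not supported: {name}")
        return self.programs[name](data)

ns = ComputationalNamespace()
ns.load_program("count_zeros", lambda d: bytes([d.count(0)]))
print(ns.execute("count_zeros", b"\x00\x01\x00")[0])  # -> 2
```

The asymmetric-access and subset-of-program-types properties from the standard are what the rejection path stands in for here; a real device would report its capabilities through identify data structures rather than raising an error.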

Andy Rudoff from Intel gave an update on persistent memory, walking through the developments along a timeline. He said that by 2019 Intel Optane PMem was generally available. The image below shows Intel’s approach to connecting Optane PMem to the memory bus.

Note that direct access to memory (DAX) is the key to this use of Optane PMem. The following image shows a timeline of PMem-related developments since 2012.
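
DAX lets an application map persistent memory into its address space and access it with ordinary loads and stores, bypassing the page cache. The same programming model can be illustrated with a memory-mapped file; the path below is a temporary stand-in (real PMem code would map a file on a DAX-mounted file system and use PMDK for the cache flush and fence handling that true persistence requires):

```python
# Illustrating the DAX-style load/store programming model with mmap.
# The temp file stands in for a file on a DAX-mounted PMem file system;
# PMDK (libpmem) would handle the CPU cache flushing a real PMem app needs.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_demo")  # stand-in for a PMem file
with open(path, "wb") as f:
    f.truncate(4096)  # one page of "persistent" memory

fd = os.open(path, os.O_RDWR)
m = mmap.mmap(fd, 4096)
m[0:5] = b"hello"      # store directly into the mapping, no read()/write() calls
m.flush()              # analogous to flushing stores toward persistence
contents = bytes(m[0:5])
m.close()
os.close(fd)
print(contents)  # -> b'hello'
```

The point of the sketch is the absence of explicit I/O system calls on the data path, which is what distinguishes the PMem programming model from block storage.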

Andy went through several customer use cases for Intel’s Optane PMem, including Oracle Exadata with PMem access through RDMA, Tencent Cloud and Baidu. He also discussed future PMem directions, particularly paired with CXL. These include accelerating AI/ML and data-centric applications with temporal caching and persistent metadata storage.

Jinpyo Kim from VMware and Michael Mesnier from Intel Labs spoke on computational storage in a virtualized environment, in collaboration with MinIO. Some of the uses presented included data scrubbing (reading data and detecting any accumulated errors) of a MinIO storage stack and a Linux file system. They found that doing this computation close to the stored data was 50% to 18X more scalable, depending upon the link speed (the process is read intensive). VMware, in a research prototype with UC Irvine, did a project on near-storage log analytics and found an order of magnitude better query performance compared to using software only; this capability is being ported to a Samsung SmartSSD.
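
Scrubbing is a natural fit for near-data processing because it reads everything and returns almost nothing. A minimal sketch of the operation itself, with an assumed block size and CRC32 as the checksum (MinIO and file systems use their own integrity mechanisms):

```python
# Sketch of data scrubbing: read every block, recompute its checksum,
# and report blocks whose stored checksum no longer matches (bit rot).
# Block size and CRC32 are illustrative assumptions.
import zlib

BLOCK = 4096

def scrub(device: bytes, checksums: list) -> list:
    """Return indices of blocks with accumulated errors."""
    bad = []
    for i in range(0, len(device), BLOCK):
        block = device[i:i + BLOCK]
        if zlib.crc32(block) != checksums[i // BLOCK]:
            bad.append(i // BLOCK)
    return bad

# Build a 3-block "device", record good checksums, then corrupt block 1.
data = bytearray(3 * BLOCK)
sums = [zlib.crc32(bytes(data[i:i + BLOCK])) for i in range(0, len(data), BLOCK)]
data[BLOCK + 7] ^= 0xFF  # simulate a bit flip in block 1

print(scrub(bytes(data), sums))  # -> [1]
```

Run on the host, every block crosses the storage link just to be checked and discarded; run on the CSD, only the (usually empty) list of bad blocks needs to move, which is consistent with the read-intensive scalability results reported in the talk.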

VMware also used NGD computational storage devices for running a Greenplum MPP database. They have done a lot of work on virtualizing CSDs (they call this vCSD) on vSphere/vSAN, which allows sharing hardware accelerators more effectively and migrating a vCSD between compatible hosts. CSDs can be used to disaggregate cloud native apps and off-load storage-intensive functions. The figure below shows the collaborative efforts between MinIO, VMware and Intel to use CSDs.

Chris Petersen from Meta talked about AI memory at Meta. Meta is using AI for many applications, at scale, from the data center to the edge. Since AI workloads scale so fast, they require more vertical integration from SW requirements to HW design. A considerable portion of capacity needs high-bandwidth accelerator memory, but inference keeps a bigger portion of its capacity at low bandwidth compared to training. Also, inference has tight latency requirements. They found that a tier of memory beyond HBM and DRAM can be leveraged, particularly for inference.

They found that software-defined memory backed by SSDs required SCM SSDs (Optane SSDs). The use of faster (SCM) SSDs reduced the need to scale out and thus reduced power. The figure below shows Meta’s view of the required memory tiers for AI applications, showing that there will be a need for higher-latency CXL-attached memory, along with higher-performance and higher-capacity memory/storage, to get optimized performance, cost and efficiency.
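
The tiering idea can be sketched as a lookup path that promotes hot data toward the fast, small tiers and lets cold data live in the capacity tier. The tier names, slot counts, and promotion policy below are illustrative assumptions, not Meta’s actual software-defined memory design:

```python
# Sketch of a tiered memory lookup (HBM -> DRAM -> SCM SSD) of the kind
# described for AI inference. Sizes and policy are illustrative.
from collections import OrderedDict

class TieredStore:
    def __init__(self, hbm_slots=2, dram_slots=4):
        self.hbm = OrderedDict()    # fastest, smallest tier
        self.dram = OrderedDict()   # middle tier
        self.scm = {}               # capacity tier (SCM SSD stand-in)
        self.hbm_slots, self.dram_slots = hbm_slots, dram_slots

    def put(self, key, value):
        self.scm[key] = value       # the capacity tier holds everything

    def get(self, key):
        for tier in (self.hbm, self.dram):
            if key in tier:         # fast path: hit in an upper tier
                tier.move_to_end(key)
                return tier[key]
        value = self.scm[key]       # slow path: fetch from SCM
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.hbm[key] = value       # hot data moves to the fast tier
        if len(self.hbm) > self.hbm_slots:
            old_k, old_v = self.hbm.popitem(last=False)
            self.dram[old_k] = old_v   # demote coldest HBM entry to DRAM
            if len(self.dram) > self.dram_slots:
                self.dram.popitem(last=False)  # evict back to SCM-only

store = TieredStore()
for k in range(6):
    store.put(k, k * 10)
store.get(0); store.get(1); store.get(2)
print(list(store.hbm), list(store.dram))  # -> [1, 2] [0]
```

A faster capacity tier (SCM rather than NAND) shortens the slow path, which is the mechanism behind the reduced scale-out and power Meta reported.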

Arthur Sainio and Pekon Gupta from SMART Modular Technologies spoke about the use of NVDIMM-N in DDR5 and CXL-enabled applications. These NVDIMM-Ns include a battery backup that allows the DRAM on the module to be written back to flash in case of power loss. This technology is also being developed for CXL applications, with an NV-XMM specification for devices that have an on-board power source for backup power and operate with the standard programming model for CXL Type-3 devices. The figure below shows the form factors for these devices.
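
The save/restore behavior that makes an NVDIMM-N persistent can be sketched as a small state machine: DRAM-speed operation until power fails, a backup-powered copy of DRAM to on-module flash, and a restore on the next power-up. The class and method names are illustrative, not from the JEDEC NVDIMM-N specification:

```python
# Toy state machine for the NVDIMM-N save/restore flow described in the
# talk. Names and structure are illustrative assumptions.

class NVDIMM:
    def __init__(self, size: int):
        self.dram = bytearray(size)   # working memory, DRAM speed
        self.flash = bytes(size)      # on-module flash backup image

    def write(self, offset: int, data: bytes):
        # Normal operation: host writes land in DRAM at full speed.
        self.dram[offset:offset + len(data)] = data

    def power_loss(self):
        # Battery/supercap backup keeps the module alive just long
        # enough to save the DRAM contents to flash.
        self.flash = bytes(self.dram)
        self.dram = bytearray(len(self.dram))  # DRAM contents are lost

    def power_restore(self):
        # On the next power-up the saved image is copied back to DRAM.
        self.dram = bytearray(self.flash)

nv = NVDIMM(16)
nv.write(0, b"persist me")
nv.power_loss()
nv.power_restore()
print(bytes(nv.dram[0:10]))  # -> b'persist me'
```

The NV-XMM direction described in the talk keeps this same save/restore idea but presents the device through the standard CXL Type-3 programming model instead of the DDR bus.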

In addition to these talks, David Eggleston from Intuitive Cognitive Consulting moderated a panel discussion with the day’s speakers, and there was a Birds of a Feather session on computational storage at the end of the day, moderated by Scott Shadley from NGD and Jason Molgaard from AMD.

Samsung, Intel, Eideticom, VMware, Meta and SMART Modular gave insightful presentations on persistent memory and computational storage at the first day of the 2022 SNIA Persistent Memory and Computational Storage Summit.
