
Panasas’s double NAS offer aims at multiple analytics workloads


Sep 9, 2022




Panasas has broadened its scale-out NAS offer to encompass high-performance and capacity options with general availability of ActiveStor Flash and ActiveStor Ultra XL. The two products target workloads that vary in file size and I/O profile across the continuum from high-performance computing (HPC) to artificial intelligence/machine learning (AI/ML).

Talking to ComputerWeekly.com, the company also revealed the limits of its interest in object storage, as well as its thoughts on cloud storage, where it currently has no presence.
The Panasas ActiveStor systems have been tailored to a range of workloads, which can mean file storage profiles that run from vast numbers of very small files to a smaller number of very large ones.
ActiveStor Flash is a solely NVMe flash-based hardware appliance aimed at smaller file sizes where rapid access is required. Its ASF-100 nodes come in a 4U form factor and take up to 3.84TB of M.2 and 46TB of U.2 NVMe capacity. DRAM and NVDIMM provide faster, cache-level storage for working data.
Meanwhile, ActiveStor Ultra XL is aimed at larger capacities and bigger file sizes. An ASU-100XL node runs to 160TB – quadruple that for the minimum configuration – mostly comprising spinning-disk HDD plus some faster M.2 NVMe capacity.
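As a quick illustration of that minimum-configuration arithmetic (the per-node figure comes from the article; the four-node minimum is an assumption implied by "quadruple that", and the variable names are ours):

```python
# Rough raw-capacity arithmetic for an ActiveStor Ultra XL starting configuration.
# Per-node capacity is taken from the article; the four-node minimum is an
# assumption implied by "quadruple that for minimum configuration".

PER_NODE_TB = 160  # mostly HDD, plus some faster M.2 NVMe
MIN_NODES = 4      # implied minimum configuration

min_raw_capacity_tb = PER_NODE_TB * MIN_NODES
print(f"Minimum ASU-100XL configuration: {MIN_NODES} nodes, {min_raw_capacity_tb}TB raw")  # 640TB raw
```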
The two systems, both running PanFS, have benefited from controller OS and file system upgrades in version 9.2 that allow customers to deploy storage blades under a single namespace. “But with volumes created to suit workloads of differing I/O characteristics – so, smaller and fast, or cooler and larger – under one single pane of glass,” said Curtis Anderson, software architect at Panasas.
He added: “We were a one-platform company until May. Then we had two new platforms which are built on the ability to use multiple media types, with metadata going to NVMe for example, SSDs for small files up to 1.5MB and HDD for large files.”
The Panasas name for this functionality is Dynamic Disk Acceleration: the automated direction of data to the appropriate tier of storage.
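As an illustration of the idea (not the PanFS implementation, which is internal to the file system), a size-based placement policy can be sketched as a simple rule mapping each write to a media class. The 1.5MB small-file cut-off follows Anderson's description above; the function and class names are hypothetical:

```python
# Minimal sketch of size-based tier selection, loosely following the
# Dynamic Disk Acceleration description above. Thresholds follow the
# article; names and structure are illustrative assumptions, not PanFS code.

from enum import Enum

SMALL_FILE_LIMIT_BYTES = 1_500_000  # ~1.5MB SSD cut-off quoted by Panasas


class Tier(Enum):
    NVME = "nvme"  # metadata
    SSD = "ssd"    # small files
    HDD = "hdd"    # large files


def select_tier(size_bytes: int, is_metadata: bool = False) -> Tier:
    """Pick a storage tier for a write based on object type and size."""
    if is_metadata:
        return Tier.NVME
    if size_bytes <= SMALL_FILE_LIMIT_BYTES:
        return Tier.SSD
    return Tier.HDD


print(select_tier(4_096))                  # Tier.SSD
print(select_tier(10_000_000_000))         # Tier.HDD
print(select_tier(512, is_metadata=True))  # Tier.NVME
```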
The reason for the shift? “The issue was, what if a customer is running HPC and wants to run another workload?” said Anderson.

The enhancements to PanFS allow for that, and the engineering behind it was, said Anderson, a “moderately sized lift” that involved refactoring PanFS to handle new hardware types, and selecting and qualifying those products for use with the system.
But what about object storage, given that so much unstructured data – Panasas’s bread and butter – is now in object storage format?
Anderson said: “Panasas is built as a Posix file system but on top of an object store, which was developed by 1999, so before Amazon’s S3. It has the characteristics of scaling and growth, etc, that object storage has, but we don’t offer access. It works differently to S3.”
Marketing and products VP Jeff Whitaker added: “Object storage is of interest, but when it comes to how the vast majority of people access data, it’s file-based. The development side of AI/ML often happens in the cloud, however, so it’s definitely something we’re interested in as we move forward.”
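To make the file-versus-object distinction concrete, the two access models look quite different from an application's point of view. A minimal sketch, with placeholder paths, bucket and key names: POSIX access goes through a mounted namespace with ordinary open()/read() calls (what PanFS exposes), while S3 access addresses data by bucket and key over an HTTP API (which Panasas says it does not offer natively).

```python
# Sketch contrasting POSIX file access with S3 object access.
# Paths, bucket and key names are placeholders for illustration only.

# POSIX-style access: the file system is mounted and files are read with
# ordinary open()/read() calls, as with a PanFS or NFS mount point.
with open("/mnt/panfs/project/results.dat", "rb") as f:
    file_data = f.read()

# S3-style access: data is addressed by bucket and key over an HTTP API
# rather than through a mounted namespace (needs boto3 and credentials).
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="project/results.dat")
object_data = obj["Body"].read()
```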
With the cloud becoming increasingly important and many suppliers offering the ability to store data there, what is the Panasas strategy?
The company is still firmly in the on-prem hardware camp but, as with object storage, it is looking at possibilities, said Whitaker. “Right now, we are an appliance-based datacentre platform, not software-only, and from what we’ve seen in the market, 85-90% of the market is still on-prem.”
He added: “Customers struggle to get performance from cloud-based storage. Cloud providers have to throttle storage so their networks aren’t saturated. Absolutely, customers are moving to the cloud and doing more there, so we are looking at different scenarios and handling S3, with partners.”


