Presented by Pure Storage
Net carbon neutrality is a priority for organizations worldwide, but we’re still a long way from achieving it. Using less energy in the first place can help us all get closer — especially when it comes to energy-hungry data centers faced with increasingly heavy workloads in the age of machine learning and AI.
Data centers account for around 1% of global electricity consumption. While energy efficiency has helped limit that growth despite the surge in demand for data services, the International Energy Agency cites significant risk that growing demand for resource-intensive applications like AI in particular will begin to outpace the gains of recent years.
In PwC’s annual CEO survey, meanwhile, just 22% of CEOs said they have made net-zero commitments; another 29% are working toward one, with the largest companies furthest along. No matter where your organization lies on that spectrum, both the expense and environmental cost of energy cannot be ignored when it comes to IT resource consumption.
At Pure Storage, our focus is to enable customers to make smarter decisions with data. An essential component of that work is helping them assess how they can achieve a desired business outcome without burning through cash — or wattage — to fuel data centers full of the IT equipment they employ to make those smarter decisions.
The performance and efficiency equilibrium
With that in mind, let’s consider the critical role of energy efficiency in computing, from consumer devices to the data center.
Modern mobile processors have adopted a mix of performance- and efficiency-optimized compute cores plus specialized neural-processing engines for machine-learning applications. These heterogeneous devices enable smartphones and laptops to deliver much longer battery life without sacrificing computing performance. As laptop and smartphone users, we only see the outcome: fast performance with long battery life. But the key to that energy proportionality is the combination of specialized hardware capabilities and software that intelligently uses different resources at just the right time based on workload.
Modern data center facilities have analogous capabilities for heterogeneous computation, with a mix of traditional server central processing units (CPUs) and more workload-specific accelerators (graphics processing units and the like). In a data center, massive-scale schedulers like Kubernetes can help place workloads, guided by a few policies from application developers and operations teams.
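To make the idea concrete, here is a minimal sketch of policy-guided placement on a heterogeneous node pool. This is not Kubernetes itself; the node and workload definitions are invented for illustration, and the policy is a single hint ("cpu" or "gpu") matched greedily to the lowest-power eligible node.

```python
# Hypothetical sketch: policy-guided placement of workloads onto a
# heterogeneous pool of nodes, in the spirit of a cluster scheduler.
# Node/Workload fields here are illustrative, not a real scheduler API.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str          # "cpu" or "gpu"
    idle_watts: float  # approximate power draw
    free_slots: int

@dataclass
class Workload:
    name: str
    needs: str  # policy hint from the developer: "cpu" or "gpu"

def place(workloads, nodes):
    """Greedy placement: match each workload's policy hint to the
    lowest-power node of the right kind that still has a free slot."""
    plan = {}
    for w in workloads:
        candidates = [n for n in nodes if n.kind == w.needs and n.free_slots > 0]
        if not candidates:
            plan[w.name] = None  # unschedulable under the current policy
            continue
        best = min(candidates, key=lambda n: n.idle_watts)
        best.free_slots -= 1
        plan[w.name] = best.name
    return plan

nodes = [Node("cpu-a", "cpu", 200.0, 2),
         Node("cpu-b", "cpu", 150.0, 2),
         Node("gpu-a", "gpu", 450.0, 1)]
workloads = [Workload("web", "cpu"), Workload("train", "gpu")]
print(place(workloads, nodes))  # {'web': 'cpu-b', 'train': 'gpu-a'}
```

Real schedulers weigh far more signals (affinity, utilization, priority), but the shape is the same: a small declarative policy from the developer, and the system decides where the work runs.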
Whether for battery-optimized mobile devices or hyperscale data centers, heterogeneous computing is critical to delivering energy efficiency and proportionality. And the key to creating usable systems from heterogeneous computing devices is an operating system that abstracts the disparate hardware architectures underneath.
An efficient operating system for storage
But what about storage in the data center? Historically, system architects would bring in completely different storage platforms to serve different performance and capacity requirements, and then be left with an unruly mix of infrastructure silos, each requiring dedicated, often manual work to manage workload placement and scheduling. That’s both inefficient and wasteful.
We now have the technology — from quad-level cell flash to storage-class memories — to build very different design points in the performance and capacity space under a single architecture family, as Pure has done. Power efficiency was a factor for Meta, for example, as it chose to deploy Pure in its new AI Research SuperCluster (RSC), which Meta believes will be the fastest AI supercomputer in the world.
For many organizations, complex and rigid legacy infrastructure technology poses a barrier to achieving the new era of intelligence that is the promise of AI. To aid in overcoming those barriers, Pure’s newest family of products makes a generational leap in both power efficiency and performance with a unique combination of hardware and software innovation that reduces power consumption and overall data center footprint.
With a modular architecture that separates storage compute resources from capacity, storage can be upgraded flexibly and non-disruptively — delivering a highly configurable and customizable platform to target a wide set of modern workloads efficiently.
Envisioning a data center-scale operating system for storage management
Separating compute resources from capacity within storage platforms provides more flexibility in creating highly efficient IT infrastructures. It not only enables administrators to assemble the right balance of IT resources for a given workload to minimize costs, it also allows them to upgrade different resources independently as needed.
Pure’s new platform evolves over time in alignment with customer requirements and is built on a nearly limitless, scalable metadata architecture, offering more than double the density, performance and power efficiency of previous versions. It also helps end users deliver on sustainability mandates with better performance on key metrics such as capacity per watt, bandwidth per watt and capacity per rack-unit, resulting in an overall smaller data center footprint.
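The metrics above are simple ratios, which makes comparisons between configurations straightforward. The sketch below computes them for two hypothetical systems; every number here is made up for illustration and does not describe any actual Pure Storage product.

```python
# Illustrative arithmetic (made-up numbers): comparing two hypothetical
# storage configurations on the efficiency metrics named in the text.
def efficiency(capacity_tb, bandwidth_gbps, watts, rack_units):
    return {
        "capacity_per_watt_tb": capacity_tb / watts,
        "bandwidth_per_watt_gbps": bandwidth_gbps / watts,
        "capacity_per_ru_tb": capacity_tb / rack_units,
    }

# Hypothetical legacy array vs. a denser, more efficient one.
legacy = efficiency(capacity_tb=300, bandwidth_gbps=10, watts=1500, rack_units=8)
dense = efficiency(capacity_tb=600, bandwidth_gbps=20, watts=1200, rack_units=5)

for metric in legacy:
    print(f"{metric}: {legacy[metric]:.3f} -> {dense[metric]:.3f}")
```

Doubling capacity and bandwidth while trimming power and rack space compounds across all three ratios, which is why a denser design can shrink the overall data center footprint disproportionately.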
But where is the data center-scale operating system for storage management? Today, improving efficiency for workloads stuck in suboptimal infrastructure silos still requires manual data management, slowing progress towards sustainability, cost and agility goals.
Delivering storage as code solves that puzzle. Pure Fusion, Pure’s autonomous storage model, enables users to achieve better outcomes through automation by embracing the agility and scalability of the cloud computing model. Intelligent workload management continually optimizes storage pools by rebalancing workloads on the fly.
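The rebalancing idea can be sketched in a few lines: detect when one pool is much hotter than another and migrate work to close the gap. To be clear, this is not Pure Fusion’s actual API or algorithm; the function, data shapes and threshold below are invented solely to illustrate the concept.

```python
# Hypothetical sketch of continuous pool rebalancing: when the utilization
# gap between the hottest and coolest pools exceeds a threshold, move the
# smallest workload off the hot pool. Invented for illustration only;
# this is NOT Pure Fusion's API.
def rebalance(pools, threshold=0.2):
    """pools: dict mapping pool name -> list of (workload, load_fraction).
    Performs at most one migration per call; returns it, or None."""
    util = {p: sum(load for _, load in ws) for p, ws in pools.items()}
    hot = max(util, key=util.get)
    cold = min(util, key=util.get)
    if util[hot] - util[cold] <= threshold:
        return None  # balanced enough; nothing to do
    # Migrate the smallest workload to minimize data movement.
    w = min(pools[hot], key=lambda x: x[1])
    pools[hot].remove(w)
    pools[cold].append(w)
    return (w[0], hot, cold)

pools = {"pool-a": [("db", 0.5), ("logs", 0.1)],
         "pool-b": [("web", 0.2)]}
print(rebalance(pools))  # ('logs', 'pool-a', 'pool-b')
```

Running such a loop continuously, rather than waiting for an administrator to notice a hot silo, is what turns storage management into an automated, cloud-like control plane.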
At Pure, we take pride in driving innovation through better science — so that customers can build smarter and automatic policies in the data center to improve energy efficiency at massive scale. And that’s good for end users, the bottom line and the planet.
Brian Gold is Engineering Director at Pure Storage.