Artificial Intelligence Could Help Data Centers Run Far More Efficiently
August 23, 2019 | MIT | Estimated reading time: 6 minutes
A novel system developed by MIT researchers automatically “learns” how to schedule data-processing operations across thousands of servers — a task traditionally reserved for imprecise, human-designed algorithms. Doing so could help today’s power-hungry data centers run far more efficiently.
Data centers can contain tens of thousands of servers, which constantly run data-processing tasks from developers and users. Cluster scheduling algorithms allocate the incoming tasks across the servers in real time, to efficiently utilize all available computing resources and get jobs done fast.
Traditionally, however, humans fine-tune those scheduling algorithms based on some basic guidelines (policies) and various tradeoffs. They may, for instance, code the algorithm to get certain jobs done quickly or to split resources equally between jobs. But workloads — meaning groups of combined tasks — come in all sizes. Therefore, it’s virtually impossible for humans to optimize their scheduling algorithms for specific workloads and, as a result, they often fall short of their true efficiency potential.
The MIT researchers instead offloaded all of the manual coding to machines. In a paper being presented at SIGCOMM, they describe a system that leverages “reinforcement learning” (RL), a trial-and-error machine-learning technique, to tailor scheduling decisions to specific workloads in specific server clusters.
To do so, they built novel RL techniques that could train on complex workloads. In training, the system tries many possible ways to allocate incoming workloads across the servers, eventually finding an optimal tradeoff between using computation resources efficiently and processing jobs quickly. No human intervention is required beyond a simple instruction, such as, “minimize job-completion times.”
Compared to the best handwritten scheduling algorithms, the researchers’ system completes jobs about 20 to 30 percent faster, and twice as fast during high-traffic times. Mostly, however, the system learns how to compact workloads efficiently to leave little waste. Results indicate the system could enable data centers to handle the same workload at higher speeds, using fewer resources.
“If you have a way of doing trial and error using machines, they can try different ways of scheduling jobs and automatically figure out which strategy is better than others,” says Hongzi Mao, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “That can improve the system performance automatically. And any slight improvement in utilization, even 1 percent, can save millions of dollars and a lot of energy in data centers.”
“There’s no one-size-fits-all to making scheduling decisions,” adds co-author Mohammad Alizadeh, an EECS professor and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In existing systems, these are hard-coded parameters that you have to decide up front. Our system instead learns to tune its schedule policy characteristics, depending on the data center and workload.”
Joining Mao and Alizadeh on the paper are postdocs Malte Schwarzkopf and Shaileshh Bojja Venkatakrishnan, and graduate research assistant Zili Meng, all of CSAIL.
RL for Scheduling
Typically, data-processing jobs come into data centers represented as graphs of “nodes” and “edges.” Each node represents a computation task that needs to be done, where the larger the node, the more computation power needed. The edges link tasks that depend on one another. Scheduling algorithms assign nodes to servers, based on various policies.
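As a rough illustration of that structure, a job can be pictured as a small directed graph in which each node carries an estimate of how much computation it needs. The TaskNode class and field names below are hypothetical stand-ins for illustration, not Decima’s actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    """One computation stage of a job; `work` stands in for required compute."""
    name: str
    work: float                                   # larger value = more computation needed
    children: list = field(default_factory=list)  # stages that can only run after this one

# A toy three-stage job: 'extract' must finish before 'transform', which feeds 'load'.
load = TaskNode("load", work=2.0)
transform = TaskNode("transform", work=5.0, children=[load])
extract = TaskNode("extract", work=1.0, children=[transform])

job = extract  # the root node; a scheduler assigns runnable nodes like this to servers
```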
But traditional RL systems are not accustomed to processing such dynamic graphs. These systems use a software “agent” that makes decisions and receives a feedback signal as a reward. Essentially, it tries to maximize its rewards for any given action, to learn an ideal behavior in a certain context. They can, for instance, help robots learn to perform a task like picking up an object by interacting with the environment, but that involves processing video or images laid out on a simpler, fixed grid of pixels.
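For readers unfamiliar with the agent-and-reward loop mentioned above, the generic sketch below shows the basic trial-and-error cycle. The env and agent objects and their method names are hypothetical placeholders, not the researchers’ code.

```python
def run_episode(env, agent, max_steps=1000):
    """Generic reinforcement-learning loop: act, observe feedback, adjust behavior."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(state)                # agent picks an action for the current state
        state, reward, done = env.step(action)   # environment responds with a reward signal
        agent.learn(state, reward)               # agent nudges itself toward higher reward
        total_reward += reward
        if done:
            break
    return total_reward
```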
To build their RL-based scheduler, called Decima, the researchers had to develop a model that could process graph-structured jobs, and scale to a large number of jobs and servers. Their system’s “agent” is a scheduling algorithm that leverages a graph neural network, commonly used to process graph-structured data. To come up with a graph neural network suitable for scheduling, they implemented a custom component that aggregates information across paths in the graph — such as quickly estimating how much computation is needed to complete a given part of the graph. That’s important for job scheduling, because “child” (lower) nodes cannot begin executing until their “parent” (upper) nodes finish, so anticipating future work along different paths in the graph is central to making good scheduling decisions.
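The path-wise aggregation described above can be pictured, very roughly, as totaling the work that still lies below each node, since child nodes cannot start until their parents finish. The recursive helper below (reusing the hypothetical TaskNode from the earlier sketch) is only an intuition aid; Decima’s graph neural network learns such an embedding rather than computing it with a hand-written rule.

```python
def downstream_work(node: TaskNode) -> float:
    """Toy estimate of the computation unlocked by finishing `node`:
    its own work plus the work of every stage that depends on it."""
    return node.work + sum(downstream_work(child) for child in node.children)

# For the toy job above: 1.0 + 5.0 + 2.0 = 8.0 units of work hang off 'extract',
# so a scheduler might prioritize it over nodes with little downstream work.
print(downstream_work(job))  # 8.0
```

For real job graphs, where several parents can share a child, a naive recursion like this would double-count work, which is one reason a learned, message-passing network is a better fit than a hand-coded rule.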
To train their RL system, the researchers simulated many different graph sequences that mimic workloads coming into data centers. The agent then makes decisions about how to allocate each node along the graph to each server. For each decision, a component computes a reward based on how well it did at a specific task — such as minimizing the average time it took to process a single job. The agent keeps going, improving its decisions, until it gets the highest reward possible.
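One way to picture the reward described here: penalize the scheduler at every time step by the number of jobs still in the system, so that a schedule which clears jobs quickly accumulates a smaller total penalty, which corresponds to a lower average completion time. The sketch and numbers below are made up purely for illustration and are not taken from the paper.

```python
def episode_return(jobs_in_system_per_step):
    """Sum of -(unfinished jobs) over an episode; less negative is better.
    Minimizing this total corresponds to minimizing average job-completion time."""
    return -sum(jobs_in_system_per_step)

# Two toy traces of how many jobs remain after each scheduling decision:
slow_schedule = [3, 3, 3, 2, 1]   # jobs linger in the system
fast_schedule = [3, 2, 1, 1, 0]   # the same jobs are cleared quickly

print(episode_return(slow_schedule))  # -12
print(episode_return(fast_schedule))  # -7: the higher (less negative) reward
```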