Syber Group

Will Facebook Go Open-Source?

December 29, 2015
Filed under Around The Net

Facebook has unveiled its next-generation GPU-based system for training neural networks: Open Rack-compatible hardware, code-named “Big Sur”, which it plans to open source.

The social media giant’s latest machine learning system has been designed for large-scale artificial intelligence (AI) computing and is, for the most part, built with Nvidia hardware.

Big Sur comprises eight high-performance GPUs of up to 300 watts each, with the flexibility to configure multiple PCI-e topologies. It makes use of Nvidia’s Tesla Accelerated Computing Platform and, as a result, is twice as fast as Facebook’s previous-generation rack.

“This means we can train twice as fast and explore networks twice as large,” said the firm in its engineering blog. “And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two.”
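The scaling Facebook describes is the standard data-parallel pattern: each GPU processes its own slice of the training batch, and the resulting gradients are averaged before the model is updated. The following is a minimal pure-Python sketch of that idea, with the eight “GPUs” simulated as plain function calls and all names and the toy one-parameter model invented for illustration — Facebook’s actual training stack is not public in this article.

```python
# Illustrative sketch of data-parallel training across 8 devices.
# Each "GPU" (here just a function call) computes a gradient on its
# shard of the batch; gradients are then averaged, as an all-reduce
# step would do in a real multi-GPU system. All names are hypothetical.

NUM_GPUS = 8

def shard_batch(batch, num_shards):
    """Split a batch into roughly equal shards, one per device."""
    size = (len(batch) + num_shards - 1) // num_shards
    return [batch[i * size:(i + 1) * size] for i in range(num_shards)]

def local_gradient(shard, weight):
    # Toy gradient for a one-parameter least-squares model y = w * x:
    # d/dw of (w*x - y)^2 summed over the shard, then averaged.
    return sum(2 * (weight * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(batch, weight, lr=0.01):
    shards = [s for s in shard_batch(batch, NUM_GPUS) if s]
    grads = [local_gradient(s, weight) for s in shards]  # one per "GPU"
    avg_grad = sum(grads) / len(grads)                   # simulated all-reduce
    return weight - lr * avg_grad

# Toy data generated by the true weight w = 3.0.
batch = [(x, 3.0 * x) for x in range(16)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(batch, w)
```

Because every device holds a full copy of the model and only the data is split, adding GPUs lets you grow the effective batch size or throughput — the “factor of two” Facebook cites for spreading work across its eight cards.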

Facebook claims that as well as better performance, Big Sur is also far more versatile and efficient than the off-the-shelf solutions in its previous generation.

“While many high-performance computing systems require special cooling and other unique infrastructure to operate, we have optimised these new servers for thermal and power efficiency, allowing us to operate them even in our own free-air cooled, Open Compute standard data centres,” explained the company.

We spoke to Nvidia’s senior product manager for GPU Computing, Will Ramey, ahead of the launch, who has been working on the Big Sur project alongside Facebook for some time.

“The project is the first time that a complete computing system designed for machine learning and AI will be released as an open source solution,” said Ramey. “By taking the purpose-built design spec that Facebook created for its own machine learning apps and open sourcing it, people will benefit from and contribute to the project so it can move the entire industry forward.”

While Big Sur was built with Nvidia’s new Tesla M40 hyperscale accelerator in mind, it can support a wide range of PCI-e cards, which Facebook believes could improve production and manufacturing efficiency, yielding more computational power for every penny it invests.

“Servers can also require maintenance and hefty operational resources, so, like the other hardware in our data centres, Big Sur was designed around operational efficiency and serviceability,” Facebook said. “We’ve removed the components that don’t get used very much, and components that fail relatively frequently – such as hard drives and DIMMs – can now be removed and replaced in a few seconds.”

Perhaps the most interesting aspect of the Big Sur announcement is Facebook’s plans to open-source it and submit the design materials to the Open Compute Project. This is a bid to make it easier for AI researchers to share techniques and technologies.

“As with all hardware systems that are released into the open, it’s our hope that others will be able to work with us to improve it,” Facebook said, adding that it believes open collaboration will help foster innovation for future designs, and put us closer to building complex AI systems that will probably take over the world and kill us all.

Nvidia released its end-to-end hyperscale data centre platform last month, claiming that it will let web services companies accelerate their machine learning workloads and power advanced artificial intelligence applications.

Consisting of two accelerators, Nvidia’s latest hyperscale line aims to let researchers design new deep neural networks more quickly for the increasing number of applications they want to power with AI, and to deploy these networks across the data centre. The line also includes a suite of GPU-accelerated libraries.

Courtesy-TheInq
