
U.S. Wants To Help Supercomputer Makers

March 1, 2016 by  
Filed under Computing


Five of the top 12 high performance computing systems in the world are owned by U.S. national labs. But they are beyond the reach, financially and technically, of many companies in the computing industry, even larger ones.

That’s according to U.S. Department of Energy (DOE) officials, who run the national labs. A new program aims to connect manufacturers with supercomputers and the expertise to use them.

The program initially provides $3 million for 10 industry projects, the DOE has announced. Whether it extends into future fiscal years may well depend on Congress.

The projects are all designed to improve efficiency, speed product development and reduce energy use.

For instance, Procter & Gamble will get help to reduce the paper pulp in its products by 20%, “which could result in significant cost and energy savings” in this energy-intensive industry, according to the project description.

Another firm, ZoomEssence, which produces “powder ingredients that capture all the key sensory components of a liquid,” will work to optimize the design of a new drying method using HPC simulations, according to the award description.

Some other projects in the initial implementation of what is being called HPC4Mfg (HPC for Manufacturing) include an effort to help GlobalFoundries optimize transistor design.

In another, the Ohio Supercomputer Center and the Edison Welding Institute will develop a welding simulation tool.

The national labs not only have the hardware; “more importantly the labs have deep expertise in using HPC to help solve complex problems,” said Donna Crawford, the associate director of computation at Lawrence Livermore National Laboratory, in a conference call. They have the applications as well, she said.

HPC can be used to virtually design and prototype products that might otherwise require physical prototypes. These systems can run simulations and visualizations to discover, for instance, new energy-efficient manufacturing methods.

Source: http://www.thegurureview.net/computing-category/u-s-wants-to-help-supercomputer-makers.html

China Using Homegrown Servers Amidst Cyber Concerns

November 5, 2014 by  
Filed under Computing


A Chinese firm has developed the country’s first homegrown servers, built entirely out of domestic technologies including a processor from local chip maker Loongson Technology.

China’s Dawning Information Industry, also known as Sugon, has developed a series of four servers using the Loongson 3B processor, the country’s state-run Xinhua News Agency reported Thursday.

“Servers are crucial applications in a country’s politics, economy, and information security. We must fully master all these technologies,” Dawning’s vice president Sha Chaoqun was quoted as saying.

The servers, including their operating systems, have all been developed from Chinese technology. The Loongson 3B processor inside them has eight cores made with a total of 1.1 billion transistors built using a 28-nanometer production process.

The Xinhua report quoted Li Guojie, a top computing researcher in the country, as saying the new servers would ensure that the security around China’s military, financial and energy sectors would no longer be in foreign control.

Dawning was contacted on Friday, but an employee declined to offer more specifics about the servers. “We don’t want to promote this product in the U.S. media,” she said. “It involves proprietary intellectual property rights, and Chinese government organizations.”

News of the servers is the latest in a series of developments in China aimed at building up the country’s own homegrown technology. Work is being done on local mobile operating systems, supercomputing, and chip making, with much of it government-backed. Earlier this year, China outlined a plan to make the country into a major player in the semiconductor space.

But it also comes at a time when cybersecurity has become a major concern for the Chinese government, following revelations about the U.S. government’s own secret surveillance programs. “Without cybersecurity there is no national security,” declared China’s Xi Jinping in March, as he announced plans to turn the country into an “Internet power.”

Two months later, China threatened to block companies from selling IT products to the country if they failed to pass a new vetting system meant to comb out secret spying programs.

Dawning, which was founded using local government-supported research, is perhaps best known for developing some of China’s supercomputers. But it also sells server products built with Intel chips. In this year’s first quarter, it had an 8.7 percent share of China’s server market, putting it in 7th place, according to research firm IDC.

Source

Is China Hurting U.S. Vendors?

June 11, 2014 by  
Filed under Computing


Shipments of servers from Chinese vendors grew at a rapid pace during the first quarter of this year, while shipments from the top U.S. server vendors declined.

Worldwide server shipments were 2.3 million units during the first quarter, growing by just 1.4 percent compared to the same quarter last year, according to Gartner.

Growth was driven by Chinese server vendors Huawei and Inspur Electronics, which were ranked fourth and fifth, respectively, behind the declining Hewlett-Packard, Dell and IBM.

Huawei has been in the top five for server shipments for more than a year, but Inspur Electronics is a new entrant. Inspur builds blade servers, rack servers and supercomputers, and is best known for being involved in the construction of China’s Tianhe-2, which is currently the world’s fastest supercomputer, according to Top500.org.

Chinese vendors partly benefited from the 18 percent shipment growth in the Asia-Pacific region, while shipments in other regions declined, Gartner said in a statement.

Server buying trends have changed in recent years. Companies like Facebook, Google and Amazon, which buy servers by the thousands, are bypassing established server makers and purchasing hardware directly from manufacturers like Quanta and Inventec. That trend in part led to the establishment of the Open Compute Project, a Facebook-led organization that provides server reference designs so companies can design data-center hardware in-house.

Similarly, Chinese cloud providers are building mega data centers and buying servers from local vendors instead of going to the big name brands, said Patrick Moorhead, analyst with Moor Insights and Strategy.

The trend of buying locally is partly due to the security tension between the U.S. and China, but servers from Chinese companies are also cheaper, Moorhead said.

Enterprise infrastructure is also being built out in China, resulting in a big demand for servers. Demand is also growing in other regions for servers from little-known vendors based in Asia, known as “white box” vendors, Moorhead said.

Source

App Stores For Supercomputers En Route

December 13, 2013 by  
Filed under Computing


A major problem facing supercomputing is that the firms that could benefit most from the technology aren’t using it. It is a dilemma.

Supercomputer-based visualization and simulation tools could allow a company to create, test and prototype products in virtual environments. Couple this virtualization capability with a 3-D printer, and a company would revolutionize its manufacturing.

But licensing fees for the software needed to simulate wind tunnels, ovens, welds and other processes are expensive, and the tools require large multicore systems and skilled engineers to use them.

One possible solution: taking an HPC process and converting it into an app.

This is how it might work: A manufacturer designing a part to reduce drag on an 18-wheel truck could upload a CAD file, plug in some parameters, hit start and let it use 128 cores of the Ohio Supercomputer Center’s (OSC) 8,500-core system. The cost would likely be anywhere from $200 to $500 for a 6,000 CPU-hour run, or about 48 hours, to simulate the process and package the results up in a report.

Testing that 18-wheeler in a physical wind tunnel could cost as much as $100,000.
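As a quick sanity check of those figures, here is a rough sketch of the arithmetic. The 128-core allocation, 6,000 CPU-hour run, $200 to $500 price and $100,000 wind-tunnel estimate are taken from the text above; the implied per-CPU-hour rate is simply derived from them and is not a published price.

    # Back-of-the-envelope check of the AweSim truck-drag example above.
    # The inputs (128 cores, 6,000 CPU-hours, $200-$500 per run, ~$100,000
    # for a physical wind-tunnel test) come from the article; the rest is derived.
    cores = 128                # cores allocated on OSC's 8,500-core system
    cpu_hours = 6000           # total CPU-hours billed for the run

    wall_clock_hours = cpu_hours / cores          # ~46.9 hours, i.e. "about 48 hours"

    cost_low, cost_high = 200, 500                # quoted price range for the run
    rate_low = cost_low / cpu_hours               # ~$0.03 per CPU-hour, implied
    rate_high = cost_high / cpu_hours             # ~$0.08 per CPU-hour, implied

    wind_tunnel_cost = 100000                     # physical test, per the article
    savings_factor = wind_tunnel_cost / cost_high # at least ~200x cheaper

    print("wall-clock time: %.1f hours" % wall_clock_hours)
    print("implied rate: $%.3f-$%.3f per CPU-hour" % (rate_low, rate_high))
    print("simulation is at least %.0fx cheaper than the wind tunnel" % savings_factor)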

Alan Chalker, the director of the OSC’s AweSim program, uses that example to explain what his organization is trying to do. The new group has some $6.5 million from government and private groups, including consumer products giant Procter & Gamble, to find ways to bring HPC to manufacturers via an app store.

The app store is slated to open at the end of the first quarter of next year, with one app and several tools that have been ported for the Web. The plan is to eventually spin off AweSim into a private firm and populate the app store with thousands of apps.

Tom Lange, director of modeling and simulation in P&G’s corporate R&D group, said he hopes that AweSim’s tools will be used for the company’s supply chain.

The software industry model is based on selling licenses, which for an HPC application can cost $50,000 a year, said Lange. That price is well out of the reach of small manufacturers interested in fixing just one problem. “What they really want is an app,” he said.

Lange said P&G has worked with supply chain partners on HPC issues, but it can be difficult because of the complexities of the relationship.

“The small supplier doesn’t want to be beholden to P&G,” said Lange. “They have an independent business and they want to be independent and they should be.”

That’s one of the reasons he likes AweSim.

AweSim will use some open source HPC tools in its apps, and is also working on agreements with major HPC software vendors to make parts of their tools available through an app.

Chalker said software vendors are interested in working with AweSim because it’s a way to get to a market that’s inaccessible today. The vendors could get some licensing fees for an app and a potential customer for larger, more expensive apps in the future.

AweSim is an outgrowth of the Blue Collar Computing initiative that started at OSC in the mid-2000s with goals similar to AweSim’s. But that program required that users purchase a lot of costly consulting work. The app store’s approach is to minimize cost, and the need for consulting help, as much as possible.

Chalker has a half dozen apps already built, including one used in the truck example. The OSC is building a software development kit to make it possible for others to build them as well. One goal is to eventually enable other supercomputing centers to provide compute capacity for the apps.

AweSim will charge users a fixed rate for CPU time, covering just its costs, and will provide consulting expertise where it is needed. Consulting fees may raise the bill for users, but Chalker said it usually wouldn’t be more than a few thousand dollars, a lot less than hiring a full-time computer scientist.

The AweSim team expects that many app users, a mechanical engineer for instance, will know enough to work with an app without the help of a computational fluid dynamics expert.

Lange says that manufacturers understand that producing domestically rather than overseas requires making products better, being innovative and not wasting resources. “You have to be committed to innovate what you make, and you have to commit to innovating how you make it,” said Lange, who sees HPC as a path to get there.

Source

NOAA Supercomputer Goes Live

August 21, 2013 by  
Filed under Around The Net


The National Oceanic and Atmospheric Administration has rolled out two new supercomputers that are expected to improve weather forecasts and perhaps help better prepare us for hurricanes.

The two IBM systems, which are identical, will be used by NOAA’s National Weather Service to produce forecast data that’s used in the U.S. and around the world.

One of the supercomputers is in Reston, Va.; the other is in Orlando. The NWS can switch between the two in about six minutes.

Each is a 213-teraflop system running a Linux operating system on Intel processors. The federal government is paying about $20 million a year to operate the leased systems.

“These are the systems that are the origin of all the weather forecasts you see,” said Ben Kyger, director of central operations at the National Centers for Environmental Prediction.

NOAA had previously used identical four-year-old 74-teraflop IBM supercomputers that ran on IBM’s AIX operating system and Power 6 chips.
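For a sense of scale, a rough comparison of the new and old machines; only the 213- and 74-teraflop figures quoted above are used, and the ratio is purely illustrative.

    # Rough comparison of NOAA's new and old forecasting systems, using the
    # figures quoted above: 213 teraflops (new) vs. 74 teraflops (old).
    new_tflops = 213.0
    old_tflops = 74.0
    speedup = new_tflops / old_tflops   # roughly 2.9x more raw compute per system
    print("each new system offers about %.1fx the throughput of the old one" % speedup)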

Before it could activate the new systems, the NWS had to ensure that they produced scientifically accurate results. It had been running the old and new systems in parallel for months, comparing their output.

The NWS has a new hurricane model, which is 15% more accurate in day five of a forecast for a storm’s track and intensity. That model is now operational and running on the new systems. That’s important, because the U.S. is expecting a busy hurricane season.

Source

U.S. Takes Back Supercomputing Crown

June 27, 2012 by  
Filed under Computing


The U.S., once again, is home to the world’s most powerful supercomputer, after losing the top spot to China two years ago and then to Japan last year.

The top computer, an IBM system at the Department of Energy’s Lawrence Livermore National Laboratory, is capable of 16.32 sustained petaflops, according to the Top 500 list, a global, twice-a-year ranking released Monday.

This system, named Sequoia, has more than 1.57 million compute cores and relies on architecture and parallelism, and not Moore’s Law, to achieve its speeds.

“We’re at the point where the processors themselves aren’t really getting any faster,” said Michael Papka, Argonne National Laboratory deputy associate director for computing, environment and life sciences.

The Argonne lab installed a similar IBM system, which ranks third on the new Top 500 list. “Moore’s Law is generally slowing down and we’re doing it (getting faster speeds) by parallelism,” Papka said.
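A rough illustration of that point, using only the Sequoia figures quoted above (16.32 sustained petaflops across roughly 1.57 million cores): the per-core breakdown below is an approximation, not a published specification.

    # Illustration of the parallelism point using the Sequoia figures above:
    # 16.32 sustained petaflops spread over ~1.57 million compute cores.
    sustained_pflops = 16.32
    cores = 1570000

    per_core_gflops = sustained_pflops * 1e6 / cores   # ~10.4 GFLOPS per core
    print("about %.1f GFLOPS per core" % per_core_gflops)

    # The aggregate scales with core count rather than per-core clock speed,
    # which is the "parallelism, not Moore's Law" argument in the quotes above.
    print("with twice the cores: about %.2f sustained PFLOPS" % (2 * sustained_pflops))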

U.S. high performance computing technology dominates the world market. IBM systems claimed five of the top ten spots on the list, and 213 systems out of the 500.

Hewlett-Packard is number two, with 141 systems on the list. Nearly 75% of the systems on this list run Intel processors, and 13% use AMD chips.

Source…