
nVidia Speaks On Performance Issue

December 5, 2012 by  
Filed under Computing


Nvidia has said that most of the outlandish performance increase figures touted by GPGPU vendors were down to poor original CPU code rather than the sheer brute-force computing power provided by GPUs.

Both AMD and Nvidia have been using real-world code examples and projects to promote the performance of their respective GPGPU accelerators for years, but now it seems some of the eye-popping figures, including speed-ups of 100x or 200x, were not down to just the computing power of GPGPUs. Sumit Gupta, GM of Nvidia’s Tesla business, said that such figures were generally down to starting with unoptimized CPU code.

During Intel’s Xeon Phi pre-launch press conference call, the firm cast doubt on some of the order-of-magnitude speed-up claims that had been bandied about for years. Gupta told The INQUIRER that while those large speed-ups did happen, they were possible because the code they were measured against was poorly optimized to begin with, so the bar was set very low.

Gupta said, “Most of the time when you saw the 100x, 200x and larger numbers, those came from universities. Nvidia may have taken university work and shown it and it has a 100x on it, but really most of those gains came from academic work. Typically we find when you investigate why someone got 100x [speed up], it is because they didn’t have good CPU code to begin with. When you investigate why they didn’t have good CPU code you find that typically they are domain scientists, not computer science guys – biologists, chemists, physicists – and they wrote some C code and it wasn’t good on the CPU. It turns out most of those people find it easier to code in CUDA C or CUDA Fortran than they do to use MPI or Pthreads to go to multi-core CPUs, so CUDA programming for a GPU is easier than multi-core CPU programming.”
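As a rough illustration of the kind of code Gupta is describing, below is a minimal, hypothetical CUDA C sketch of a data-parallel vector addition of the sort a domain scientist might write. It is not drawn from any of the academic codes behind the 100x claims; the array size, block size and kernel are illustrative only. The point of comparison is that the same loop on a multi-core CPU would need explicit thread creation and work partitioning with Pthreads, or rank and message management with MPI.

// Minimal illustrative CUDA C example (hypothetical, not from the article):
// each GPU thread adds one pair of elements, and the CUDA runtime handles
// scheduling the threads across the GPU.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                 // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device arrays and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}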

Source…

The First PC Had a Birthday

August 20, 2011 by  
Filed under Computing


The year was 1981 when IBM introduced its IBM PC model 5150 on August 12th, 30 years ago today.

The first IBM PC wasn’t much by today’s standards. It had an Intel 8088 processor that ran at the blazing speed of 4.77MHz. The base memory configuration was all of 16kB, expandable all the way up to 256kB, and it had two 5-1/4in, 160kB capacity floppy disk drives but no hard drive.

A keyboard and 12in monochrome monitor were included, with a colour monitor optional. The 5150 ran IBM BASIC in ROM and came with a PC-DOS boot diskette put out by a previously unknown software startup based in Seattle named Microsoft.

IBM priced its initial IBM PC at a whopping $1,565, and that was a relatively steep price in those days, worth about $5,000 today, give or take a few hundred dollars. In the US in 1981 that was about the cost of a decent used car.

Because the IBM PC was meant to be sold to the general public but IBM didn’t have any retail stores, the company sold it through the stores of US catalogue retailer Sears & Roebuck.

Subsequently IBM released follow-on models through 1986, including the PC/XT, the first with an internal hard drive; the PC/AT with an 80286 chip running at 6MHz and later 8MHz; the 6MHz XT/286 with zero wait-state memory, which was actually faster than the 8MHz PC/AT; the (not very) Portable and Convertible models; the ill-fated XT/370, AT/370, 3270 PC and 3270/AT mainframe terminal emulators; plus the unsuccessful PC Jr.

Read More….

IBM Debuts Fast Storage System

July 30, 2011 by  
Filed under Computing



With an eye toward helping tomorrow’s data-intensive organizations, IBM researchers have developed a super-fast storage system capable of scanning 10 billion files in 43 minutes.

This system easily bested their previous system, demonstrated at Supercomputing 2007, which scanned 1 billion files in three hours.

Key to the increased performance was the use of speedy flash memory to store the metadata that the storage system uses to locate requested information. Traditionally, metadata repositories reside on disk, and accessing them there slows operations.

“If we have that data on very fast storage, then we can do those operations much more quickly,” said Bruce Hillsberg, director of storage systems at IBM Research Almaden, where the cluster was built. “Being able to use solid-state storage for metadata operations really allows us to do some of these management tasks more quickly than we could ever do if it was all on disk.”

IBM foresees that its customers will be grappling with a lot more information in the years to come.

“As customers have to store and process large amounts of data for large periods of time, they will need efficient ways of managing that data,” Hillsberg said.

For the new demonstration, IBM built a cluster of 10 eight-core servers equipped with a total of 6.8 terabytes of solid-state memory. IBM used four 3205 solid-state storage systems from Violin Memory. The resulting system was able to read files at a rate of almost 5 GB/s (gigabytes per second).

Read More….

US Seeks To Regain Supercomputer Title

March 25, 2011 by  
Filed under Computing


Last year the US lost the supercomputing crown to China, which was assisted by a US corporation. Hating to be beaten, the US is now seeking to win the crown back under a project being called Titan.

On the campus of Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, we have gotten word that “Titan” has been commissioned by the US Department of Energy. The supercomputer is expected to achieve speeds of 20 petaflops.

Read More….
